Test Report: KVM_Linux_crio 16890

dc702cb3cbb2bfe371541339d66d19e451f60279:2023-07-17:30187

Failed tests (27/288)

Order  Failed test  Duration (s)
25 TestAddons/parallel/Ingress 161.55
36 TestAddons/StoppedEnableDisable 154.67
47 TestErrorSpam/setup 50.51
152 TestIngressAddonLegacy/serial/ValidateIngressAddons 169.81
200 TestMultiNode/serial/PingHostFrom2Pods 3.36
206 TestMultiNode/serial/RestartKeepsNodes 684.85
208 TestMultiNode/serial/StopMultiNode 143.23
215 TestPreload 277.87
221 TestRunningBinaryUpgrade 222.79
228 TestStoppedBinaryUpgrade/Upgrade 296.73
229 TestPause/serial/SecondStartNoReconfiguration 69.19
267 TestStartStop/group/old-k8s-version/serial/Stop 140.35
272 TestStartStop/group/no-preload/serial/Stop 140.13
275 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.56
287 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 12.42
289 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.39
291 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
295 TestStartStop/group/embed-certs/serial/Stop 140.37
296 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
298 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 543.71
299 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 543.71
300 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.77
301 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 542.37
302 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 348.7
303 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 423.05
304 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 143.86
306 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 290.23
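A single failed test from the table can be re-run by name with the standard Go test runner; a minimal sketch, assuming a checkout of minikube at the commit above and that the integration tests live under test/integration (the addons_test.go and helpers_test.go references below suggest this layout). The kvm2 driver and crio runtime used by this job are selected through additional harness flags that are omitted here:

	go test ./test/integration -run "TestAddons/parallel/Ingress" -timeout 60m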
TestAddons/parallel/Ingress (161.55s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-962955 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-962955 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-962955 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [20b6362d-9006-45c4-8ad5-31a8d7b2269d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [20b6362d-9006-45c4-8ad5-31a8d7b2269d] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 17.053060654s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p addons-962955 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-962955 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.111741515s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:262: (dbg) Run:  kubectl --context addons-962955 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:262: (dbg) Done: kubectl --context addons-962955 replace --force -f testdata/ingress-dns-example-v1.yaml: (1.018455065s)
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p addons-962955 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.39.215
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p addons-962955 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p addons-962955 addons disable ingress-dns --alsologtostderr -v=1: (1.78284158s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p addons-962955 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p addons-962955 addons disable ingress --alsologtostderr -v=1: (8.074711386s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-962955 -n addons-962955
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-962955 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-962955 logs -n 25: (1.377309326s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-435458 | jenkins | v1.30.1 | 17 Jul 23 18:43 UTC |                     |
	|         | -p download-only-435458        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-435458 | jenkins | v1.30.1 | 17 Jul 23 18:43 UTC |                     |
	|         | -p download-only-435458        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.30.1 | 17 Jul 23 18:43 UTC | 17 Jul 23 18:43 UTC |
	| delete  | -p download-only-435458        | download-only-435458 | jenkins | v1.30.1 | 17 Jul 23 18:43 UTC | 17 Jul 23 18:43 UTC |
	| delete  | -p download-only-435458        | download-only-435458 | jenkins | v1.30.1 | 17 Jul 23 18:43 UTC | 17 Jul 23 18:43 UTC |
	| start   | --download-only -p             | binary-mirror-877913 | jenkins | v1.30.1 | 17 Jul 23 18:43 UTC |                     |
	|         | binary-mirror-877913           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:45753         |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-877913        | binary-mirror-877913 | jenkins | v1.30.1 | 17 Jul 23 18:43 UTC | 17 Jul 23 18:43 UTC |
	| start   | -p addons-962955               | addons-962955        | jenkins | v1.30.1 | 17 Jul 23 18:43 UTC | 17 Jul 23 18:46 UTC |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	|         | --addons=helm-tiller           |                      |         |         |                     |                     |
	| addons  | enable headlamp                | addons-962955        | jenkins | v1.30.1 | 17 Jul 23 18:46 UTC | 17 Jul 23 18:46 UTC |
	|         | -p addons-962955               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-962955        | jenkins | v1.30.1 | 17 Jul 23 18:46 UTC | 17 Jul 23 18:46 UTC |
	|         | addons-962955                  |                      |         |         |                     |                     |
	| addons  | addons-962955 addons disable   | addons-962955        | jenkins | v1.30.1 | 17 Jul 23 18:46 UTC | 17 Jul 23 18:46 UTC |
	|         | helm-tiller --alsologtostderr  |                      |         |         |                     |                     |
	|         | -v=1                           |                      |         |         |                     |                     |
	| ip      | addons-962955 ip               | addons-962955        | jenkins | v1.30.1 | 17 Jul 23 18:46 UTC | 17 Jul 23 18:46 UTC |
	| addons  | addons-962955 addons disable   | addons-962955        | jenkins | v1.30.1 | 17 Jul 23 18:46 UTC | 17 Jul 23 18:46 UTC |
	|         | registry --alsologtostderr     |                      |         |         |                     |                     |
	|         | -v=1                           |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-962955        | jenkins | v1.30.1 | 17 Jul 23 18:46 UTC | 17 Jul 23 18:46 UTC |
	|         | addons-962955                  |                      |         |         |                     |                     |
	| addons  | addons-962955 addons           | addons-962955        | jenkins | v1.30.1 | 17 Jul 23 18:46 UTC | 17 Jul 23 18:46 UTC |
	|         | disable metrics-server         |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| ssh     | addons-962955 ssh curl -s      | addons-962955        | jenkins | v1.30.1 | 17 Jul 23 18:46 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:    |                      |         |         |                     |                     |
	|         | nginx.example.com'             |                      |         |         |                     |                     |
	| addons  | addons-962955 addons           | addons-962955        | jenkins | v1.30.1 | 17 Jul 23 18:46 UTC | 17 Jul 23 18:46 UTC |
	|         | disable csi-hostpath-driver    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | addons-962955 addons           | addons-962955        | jenkins | v1.30.1 | 17 Jul 23 18:46 UTC | 17 Jul 23 18:47 UTC |
	|         | disable volumesnapshots        |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| ip      | addons-962955 ip               | addons-962955        | jenkins | v1.30.1 | 17 Jul 23 18:48 UTC | 17 Jul 23 18:48 UTC |
	| addons  | addons-962955 addons disable   | addons-962955        | jenkins | v1.30.1 | 17 Jul 23 18:48 UTC | 17 Jul 23 18:48 UTC |
	|         | ingress-dns --alsologtostderr  |                      |         |         |                     |                     |
	|         | -v=1                           |                      |         |         |                     |                     |
	| addons  | addons-962955 addons disable   | addons-962955        | jenkins | v1.30.1 | 17 Jul 23 18:48 UTC | 17 Jul 23 18:48 UTC |
	|         | ingress --alsologtostderr -v=1 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 18:43:31
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 18:43:31.835255 1069276 out.go:296] Setting OutFile to fd 1 ...
	I0717 18:43:31.835953 1069276 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 18:43:31.835970 1069276 out.go:309] Setting ErrFile to fd 2...
	I0717 18:43:31.835977 1069276 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 18:43:31.836446 1069276 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1061725/.minikube/bin
	I0717 18:43:31.837701 1069276 out.go:303] Setting JSON to false
	I0717 18:43:31.839118 1069276 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":12363,"bootTime":1689607049,"procs":618,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:43:31.839196 1069276 start.go:138] virtualization: kvm guest
	I0717 18:43:31.842701 1069276 out.go:177] * [addons-962955] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 18:43:31.844711 1069276 notify.go:220] Checking for updates...
	I0717 18:43:31.844725 1069276 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 18:43:31.846824 1069276 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:43:31.848655 1069276 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 18:43:31.850587 1069276 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1061725/.minikube
	I0717 18:43:31.852543 1069276 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 18:43:31.854358 1069276 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 18:43:31.856523 1069276 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 18:43:31.890581 1069276 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 18:43:31.892466 1069276 start.go:298] selected driver: kvm2
	I0717 18:43:31.892495 1069276 start.go:880] validating driver "kvm2" against <nil>
	I0717 18:43:31.892534 1069276 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 18:43:31.893444 1069276 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:43:31.893589 1069276 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16890-1061725/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 18:43:31.909653 1069276 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.30.1
	I0717 18:43:31.909726 1069276 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 18:43:31.909985 1069276 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:43:31.910041 1069276 cni.go:84] Creating CNI manager for ""
	I0717 18:43:31.910067 1069276 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:43:31.910080 1069276 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 18:43:31.910098 1069276 start_flags.go:319] config:
	{Name:addons-962955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-962955 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni
FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 18:43:31.910275 1069276 iso.go:125] acquiring lock: {Name:mk4c08fdde891ec9d41b144ca36ec57b2da32175 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:43:31.912775 1069276 out.go:177] * Starting control plane node addons-962955 in cluster addons-962955
	I0717 18:43:31.914448 1069276 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 18:43:31.914517 1069276 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0717 18:43:31.914534 1069276 cache.go:57] Caching tarball of preloaded images
	I0717 18:43:31.914639 1069276 preload.go:174] Found /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 18:43:31.914651 1069276 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 18:43:31.915000 1069276 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/config.json ...
	I0717 18:43:31.915048 1069276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/config.json: {Name:mke4d7aa956c2e910146c575f860426887092ef5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:43:31.915210 1069276 start.go:365] acquiring machines lock for addons-962955: {Name:mk1fbc5d19c9a63796b02883911e39f1c406ff89 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 18:43:31.915290 1069276 start.go:369] acquired machines lock for "addons-962955" in 62.115µs
	I0717 18:43:31.915317 1069276 start.go:93] Provisioning new machine with config: &{Name:addons-962955 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-962955
Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:43:31.915404 1069276 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 18:43:31.917645 1069276 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0717 18:43:31.917835 1069276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:43:31.917887 1069276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:43:31.933265 1069276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36549
	I0717 18:43:31.933844 1069276 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:43:31.934514 1069276 main.go:141] libmachine: Using API Version  1
	I0717 18:43:31.934543 1069276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:43:31.934882 1069276 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:43:31.935122 1069276 main.go:141] libmachine: (addons-962955) Calling .GetMachineName
	I0717 18:43:31.935319 1069276 main.go:141] libmachine: (addons-962955) Calling .DriverName
	I0717 18:43:31.935519 1069276 start.go:159] libmachine.API.Create for "addons-962955" (driver="kvm2")
	I0717 18:43:31.935554 1069276 client.go:168] LocalClient.Create starting
	I0717 18:43:31.935609 1069276 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem
	I0717 18:43:32.158915 1069276 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem
	I0717 18:43:32.297525 1069276 main.go:141] libmachine: Running pre-create checks...
	I0717 18:43:32.297584 1069276 main.go:141] libmachine: (addons-962955) Calling .PreCreateCheck
	I0717 18:43:32.298161 1069276 main.go:141] libmachine: (addons-962955) Calling .GetConfigRaw
	I0717 18:43:32.298746 1069276 main.go:141] libmachine: Creating machine...
	I0717 18:43:32.298768 1069276 main.go:141] libmachine: (addons-962955) Calling .Create
	I0717 18:43:32.298963 1069276 main.go:141] libmachine: (addons-962955) Creating KVM machine...
	I0717 18:43:32.300333 1069276 main.go:141] libmachine: (addons-962955) DBG | found existing default KVM network
	I0717 18:43:32.301080 1069276 main.go:141] libmachine: (addons-962955) DBG | I0717 18:43:32.300899 1069298 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f130}
	I0717 18:43:32.307669 1069276 main.go:141] libmachine: (addons-962955) DBG | trying to create private KVM network mk-addons-962955 192.168.39.0/24...
	I0717 18:43:32.385753 1069276 main.go:141] libmachine: (addons-962955) DBG | private KVM network mk-addons-962955 192.168.39.0/24 created
	I0717 18:43:32.385790 1069276 main.go:141] libmachine: (addons-962955) Setting up store path in /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/addons-962955 ...
	I0717 18:43:32.385804 1069276 main.go:141] libmachine: (addons-962955) DBG | I0717 18:43:32.385699 1069298 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/16890-1061725/.minikube
	I0717 18:43:32.385831 1069276 main.go:141] libmachine: (addons-962955) Building disk image from file:///home/jenkins/minikube-integration/16890-1061725/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso
	I0717 18:43:32.385850 1069276 main.go:141] libmachine: (addons-962955) Downloading /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/16890-1061725/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso...
	I0717 18:43:32.631923 1069276 main.go:141] libmachine: (addons-962955) DBG | I0717 18:43:32.631631 1069298 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/addons-962955/id_rsa...
	I0717 18:43:32.688219 1069276 main.go:141] libmachine: (addons-962955) DBG | I0717 18:43:32.688033 1069298 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/addons-962955/addons-962955.rawdisk...
	I0717 18:43:32.688276 1069276 main.go:141] libmachine: (addons-962955) DBG | Writing magic tar header
	I0717 18:43:32.688292 1069276 main.go:141] libmachine: (addons-962955) DBG | Writing SSH key tar header
	I0717 18:43:32.688305 1069276 main.go:141] libmachine: (addons-962955) DBG | I0717 18:43:32.688156 1069298 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/addons-962955 ...
	I0717 18:43:32.688318 1069276 main.go:141] libmachine: (addons-962955) Setting executable bit set on /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/addons-962955 (perms=drwx------)
	I0717 18:43:32.688373 1069276 main.go:141] libmachine: (addons-962955) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/addons-962955
	I0717 18:43:32.688406 1069276 main.go:141] libmachine: (addons-962955) Setting executable bit set on /home/jenkins/minikube-integration/16890-1061725/.minikube/machines (perms=drwxr-xr-x)
	I0717 18:43:32.688417 1069276 main.go:141] libmachine: (addons-962955) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines
	I0717 18:43:32.688429 1069276 main.go:141] libmachine: (addons-962955) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16890-1061725/.minikube
	I0717 18:43:32.688435 1069276 main.go:141] libmachine: (addons-962955) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16890-1061725
	I0717 18:43:32.688443 1069276 main.go:141] libmachine: (addons-962955) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 18:43:32.688449 1069276 main.go:141] libmachine: (addons-962955) DBG | Checking permissions on dir: /home/jenkins
	I0717 18:43:32.688457 1069276 main.go:141] libmachine: (addons-962955) DBG | Checking permissions on dir: /home
	I0717 18:43:32.688467 1069276 main.go:141] libmachine: (addons-962955) DBG | Skipping /home - not owner
	I0717 18:43:32.688482 1069276 main.go:141] libmachine: (addons-962955) Setting executable bit set on /home/jenkins/minikube-integration/16890-1061725/.minikube (perms=drwxr-xr-x)
	I0717 18:43:32.688493 1069276 main.go:141] libmachine: (addons-962955) Setting executable bit set on /home/jenkins/minikube-integration/16890-1061725 (perms=drwxrwxr-x)
	I0717 18:43:32.688564 1069276 main.go:141] libmachine: (addons-962955) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 18:43:32.688600 1069276 main.go:141] libmachine: (addons-962955) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 18:43:32.688614 1069276 main.go:141] libmachine: (addons-962955) Creating domain...
	I0717 18:43:32.689862 1069276 main.go:141] libmachine: (addons-962955) define libvirt domain using xml: 
	I0717 18:43:32.689896 1069276 main.go:141] libmachine: (addons-962955) <domain type='kvm'>
	I0717 18:43:32.689908 1069276 main.go:141] libmachine: (addons-962955)   <name>addons-962955</name>
	I0717 18:43:32.689927 1069276 main.go:141] libmachine: (addons-962955)   <memory unit='MiB'>4000</memory>
	I0717 18:43:32.689974 1069276 main.go:141] libmachine: (addons-962955)   <vcpu>2</vcpu>
	I0717 18:43:32.690003 1069276 main.go:141] libmachine: (addons-962955)   <features>
	I0717 18:43:32.690023 1069276 main.go:141] libmachine: (addons-962955)     <acpi/>
	I0717 18:43:32.690038 1069276 main.go:141] libmachine: (addons-962955)     <apic/>
	I0717 18:43:32.690052 1069276 main.go:141] libmachine: (addons-962955)     <pae/>
	I0717 18:43:32.690068 1069276 main.go:141] libmachine: (addons-962955)     
	I0717 18:43:32.690082 1069276 main.go:141] libmachine: (addons-962955)   </features>
	I0717 18:43:32.690093 1069276 main.go:141] libmachine: (addons-962955)   <cpu mode='host-passthrough'>
	I0717 18:43:32.690104 1069276 main.go:141] libmachine: (addons-962955)   
	I0717 18:43:32.690120 1069276 main.go:141] libmachine: (addons-962955)   </cpu>
	I0717 18:43:32.690131 1069276 main.go:141] libmachine: (addons-962955)   <os>
	I0717 18:43:32.690149 1069276 main.go:141] libmachine: (addons-962955)     <type>hvm</type>
	I0717 18:43:32.690163 1069276 main.go:141] libmachine: (addons-962955)     <boot dev='cdrom'/>
	I0717 18:43:32.690175 1069276 main.go:141] libmachine: (addons-962955)     <boot dev='hd'/>
	I0717 18:43:32.690189 1069276 main.go:141] libmachine: (addons-962955)     <bootmenu enable='no'/>
	I0717 18:43:32.690205 1069276 main.go:141] libmachine: (addons-962955)   </os>
	I0717 18:43:32.690242 1069276 main.go:141] libmachine: (addons-962955)   <devices>
	I0717 18:43:32.690271 1069276 main.go:141] libmachine: (addons-962955)     <disk type='file' device='cdrom'>
	I0717 18:43:32.690289 1069276 main.go:141] libmachine: (addons-962955)       <source file='/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/addons-962955/boot2docker.iso'/>
	I0717 18:43:32.690310 1069276 main.go:141] libmachine: (addons-962955)       <target dev='hdc' bus='scsi'/>
	I0717 18:43:32.690330 1069276 main.go:141] libmachine: (addons-962955)       <readonly/>
	I0717 18:43:32.690347 1069276 main.go:141] libmachine: (addons-962955)     </disk>
	I0717 18:43:32.690370 1069276 main.go:141] libmachine: (addons-962955)     <disk type='file' device='disk'>
	I0717 18:43:32.690386 1069276 main.go:141] libmachine: (addons-962955)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 18:43:32.690404 1069276 main.go:141] libmachine: (addons-962955)       <source file='/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/addons-962955/addons-962955.rawdisk'/>
	I0717 18:43:32.690433 1069276 main.go:141] libmachine: (addons-962955)       <target dev='hda' bus='virtio'/>
	I0717 18:43:32.690446 1069276 main.go:141] libmachine: (addons-962955)     </disk>
	I0717 18:43:32.690456 1069276 main.go:141] libmachine: (addons-962955)     <interface type='network'>
	I0717 18:43:32.690487 1069276 main.go:141] libmachine: (addons-962955)       <source network='mk-addons-962955'/>
	I0717 18:43:32.690498 1069276 main.go:141] libmachine: (addons-962955)       <model type='virtio'/>
	I0717 18:43:32.690508 1069276 main.go:141] libmachine: (addons-962955)     </interface>
	I0717 18:43:32.690523 1069276 main.go:141] libmachine: (addons-962955)     <interface type='network'>
	I0717 18:43:32.690536 1069276 main.go:141] libmachine: (addons-962955)       <source network='default'/>
	I0717 18:43:32.690544 1069276 main.go:141] libmachine: (addons-962955)       <model type='virtio'/>
	I0717 18:43:32.690554 1069276 main.go:141] libmachine: (addons-962955)     </interface>
	I0717 18:43:32.690566 1069276 main.go:141] libmachine: (addons-962955)     <serial type='pty'>
	I0717 18:43:32.690579 1069276 main.go:141] libmachine: (addons-962955)       <target port='0'/>
	I0717 18:43:32.690591 1069276 main.go:141] libmachine: (addons-962955)     </serial>
	I0717 18:43:32.690610 1069276 main.go:141] libmachine: (addons-962955)     <console type='pty'>
	I0717 18:43:32.690634 1069276 main.go:141] libmachine: (addons-962955)       <target type='serial' port='0'/>
	I0717 18:43:32.690651 1069276 main.go:141] libmachine: (addons-962955)     </console>
	I0717 18:43:32.690666 1069276 main.go:141] libmachine: (addons-962955)     <rng model='virtio'>
	I0717 18:43:32.690683 1069276 main.go:141] libmachine: (addons-962955)       <backend model='random'>/dev/random</backend>
	I0717 18:43:32.690697 1069276 main.go:141] libmachine: (addons-962955)     </rng>
	I0717 18:43:32.690711 1069276 main.go:141] libmachine: (addons-962955)     
	I0717 18:43:32.690727 1069276 main.go:141] libmachine: (addons-962955)     
	I0717 18:43:32.690742 1069276 main.go:141] libmachine: (addons-962955)   </devices>
	I0717 18:43:32.690755 1069276 main.go:141] libmachine: (addons-962955) </domain>
	I0717 18:43:32.690775 1069276 main.go:141] libmachine: (addons-962955) 
	I0717 18:43:32.696225 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:1a:5d:7b in network default
	I0717 18:43:32.697099 1069276 main.go:141] libmachine: (addons-962955) Ensuring networks are active...
	I0717 18:43:32.697129 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:32.697917 1069276 main.go:141] libmachine: (addons-962955) Ensuring network default is active
	I0717 18:43:32.698249 1069276 main.go:141] libmachine: (addons-962955) Ensuring network mk-addons-962955 is active
	I0717 18:43:32.698761 1069276 main.go:141] libmachine: (addons-962955) Getting domain xml...
	I0717 18:43:32.699457 1069276 main.go:141] libmachine: (addons-962955) Creating domain...
	I0717 18:43:33.993254 1069276 main.go:141] libmachine: (addons-962955) Waiting to get IP...
	I0717 18:43:33.994266 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:33.994795 1069276 main.go:141] libmachine: (addons-962955) DBG | unable to find current IP address of domain addons-962955 in network mk-addons-962955
	I0717 18:43:33.994850 1069276 main.go:141] libmachine: (addons-962955) DBG | I0717 18:43:33.994767 1069298 retry.go:31] will retry after 291.128594ms: waiting for machine to come up
	I0717 18:43:34.287489 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:34.287961 1069276 main.go:141] libmachine: (addons-962955) DBG | unable to find current IP address of domain addons-962955 in network mk-addons-962955
	I0717 18:43:34.288001 1069276 main.go:141] libmachine: (addons-962955) DBG | I0717 18:43:34.287925 1069298 retry.go:31] will retry after 345.811783ms: waiting for machine to come up
	I0717 18:43:34.635789 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:34.636275 1069276 main.go:141] libmachine: (addons-962955) DBG | unable to find current IP address of domain addons-962955 in network mk-addons-962955
	I0717 18:43:34.636305 1069276 main.go:141] libmachine: (addons-962955) DBG | I0717 18:43:34.636227 1069298 retry.go:31] will retry after 355.215684ms: waiting for machine to come up
	I0717 18:43:34.992740 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:34.993318 1069276 main.go:141] libmachine: (addons-962955) DBG | unable to find current IP address of domain addons-962955 in network mk-addons-962955
	I0717 18:43:34.993351 1069276 main.go:141] libmachine: (addons-962955) DBG | I0717 18:43:34.993266 1069298 retry.go:31] will retry after 501.344895ms: waiting for machine to come up
	I0717 18:43:35.496309 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:35.496760 1069276 main.go:141] libmachine: (addons-962955) DBG | unable to find current IP address of domain addons-962955 in network mk-addons-962955
	I0717 18:43:35.496788 1069276 main.go:141] libmachine: (addons-962955) DBG | I0717 18:43:35.496721 1069298 retry.go:31] will retry after 617.687908ms: waiting for machine to come up
	I0717 18:43:36.115761 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:36.116423 1069276 main.go:141] libmachine: (addons-962955) DBG | unable to find current IP address of domain addons-962955 in network mk-addons-962955
	I0717 18:43:36.116451 1069276 main.go:141] libmachine: (addons-962955) DBG | I0717 18:43:36.116371 1069298 retry.go:31] will retry after 925.476986ms: waiting for machine to come up
	I0717 18:43:37.043492 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:37.044026 1069276 main.go:141] libmachine: (addons-962955) DBG | unable to find current IP address of domain addons-962955 in network mk-addons-962955
	I0717 18:43:37.044053 1069276 main.go:141] libmachine: (addons-962955) DBG | I0717 18:43:37.043944 1069298 retry.go:31] will retry after 829.142066ms: waiting for machine to come up
	I0717 18:43:37.874491 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:37.874987 1069276 main.go:141] libmachine: (addons-962955) DBG | unable to find current IP address of domain addons-962955 in network mk-addons-962955
	I0717 18:43:37.875016 1069276 main.go:141] libmachine: (addons-962955) DBG | I0717 18:43:37.874929 1069298 retry.go:31] will retry after 998.878722ms: waiting for machine to come up
	I0717 18:43:38.875524 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:38.876128 1069276 main.go:141] libmachine: (addons-962955) DBG | unable to find current IP address of domain addons-962955 in network mk-addons-962955
	I0717 18:43:38.876157 1069276 main.go:141] libmachine: (addons-962955) DBG | I0717 18:43:38.876065 1069298 retry.go:31] will retry after 1.826757973s: waiting for machine to come up
	I0717 18:43:40.705490 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:40.706271 1069276 main.go:141] libmachine: (addons-962955) DBG | unable to find current IP address of domain addons-962955 in network mk-addons-962955
	I0717 18:43:40.706325 1069276 main.go:141] libmachine: (addons-962955) DBG | I0717 18:43:40.706184 1069298 retry.go:31] will retry after 1.548901088s: waiting for machine to come up
	I0717 18:43:42.256746 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:42.257440 1069276 main.go:141] libmachine: (addons-962955) DBG | unable to find current IP address of domain addons-962955 in network mk-addons-962955
	I0717 18:43:42.257476 1069276 main.go:141] libmachine: (addons-962955) DBG | I0717 18:43:42.257379 1069298 retry.go:31] will retry after 2.698860914s: waiting for machine to come up
	I0717 18:43:44.959826 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:44.960563 1069276 main.go:141] libmachine: (addons-962955) DBG | unable to find current IP address of domain addons-962955 in network mk-addons-962955
	I0717 18:43:44.960609 1069276 main.go:141] libmachine: (addons-962955) DBG | I0717 18:43:44.960500 1069298 retry.go:31] will retry after 3.324486486s: waiting for machine to come up
	I0717 18:43:48.286545 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:48.287082 1069276 main.go:141] libmachine: (addons-962955) DBG | unable to find current IP address of domain addons-962955 in network mk-addons-962955
	I0717 18:43:48.287115 1069276 main.go:141] libmachine: (addons-962955) DBG | I0717 18:43:48.287026 1069298 retry.go:31] will retry after 2.786959947s: waiting for machine to come up
	I0717 18:43:51.077282 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:51.077707 1069276 main.go:141] libmachine: (addons-962955) DBG | unable to find current IP address of domain addons-962955 in network mk-addons-962955
	I0717 18:43:51.077738 1069276 main.go:141] libmachine: (addons-962955) DBG | I0717 18:43:51.077660 1069298 retry.go:31] will retry after 4.578324942s: waiting for machine to come up
	I0717 18:43:55.660663 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:55.661087 1069276 main.go:141] libmachine: (addons-962955) Found IP for machine: 192.168.39.215
	I0717 18:43:55.661142 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has current primary IP address 192.168.39.215 and MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:55.661149 1069276 main.go:141] libmachine: (addons-962955) Reserving static IP address...
	I0717 18:43:55.661627 1069276 main.go:141] libmachine: (addons-962955) DBG | unable to find host DHCP lease matching {name: "addons-962955", mac: "52:54:00:e9:53:85", ip: "192.168.39.215"} in network mk-addons-962955
	I0717 18:43:55.750001 1069276 main.go:141] libmachine: (addons-962955) DBG | Getting to WaitForSSH function...
	I0717 18:43:55.750062 1069276 main.go:141] libmachine: (addons-962955) Reserved static IP address: 192.168.39.215
	I0717 18:43:55.750081 1069276 main.go:141] libmachine: (addons-962955) Waiting for SSH to be available...
	I0717 18:43:55.752838 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:55.753379 1069276 main.go:141] libmachine: (addons-962955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:53:85", ip: ""} in network mk-addons-962955: {Iface:virbr1 ExpiryTime:2023-07-17 19:43:48 +0000 UTC Type:0 Mac:52:54:00:e9:53:85 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e9:53:85}
	I0717 18:43:55.753418 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined IP address 192.168.39.215 and MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:55.753535 1069276 main.go:141] libmachine: (addons-962955) DBG | Using SSH client type: external
	I0717 18:43:55.753575 1069276 main.go:141] libmachine: (addons-962955) DBG | Using SSH private key: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/addons-962955/id_rsa (-rw-------)
	I0717 18:43:55.753614 1069276 main.go:141] libmachine: (addons-962955) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.215 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/addons-962955/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:43:55.753633 1069276 main.go:141] libmachine: (addons-962955) DBG | About to run SSH command:
	I0717 18:43:55.753646 1069276 main.go:141] libmachine: (addons-962955) DBG | exit 0
	I0717 18:43:55.846671 1069276 main.go:141] libmachine: (addons-962955) DBG | SSH cmd err, output: <nil>: 
	I0717 18:43:55.847030 1069276 main.go:141] libmachine: (addons-962955) KVM machine creation complete!
	I0717 18:43:55.847326 1069276 main.go:141] libmachine: (addons-962955) Calling .GetConfigRaw
	I0717 18:43:55.847891 1069276 main.go:141] libmachine: (addons-962955) Calling .DriverName
	I0717 18:43:55.848147 1069276 main.go:141] libmachine: (addons-962955) Calling .DriverName
	I0717 18:43:55.848330 1069276 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 18:43:55.848349 1069276 main.go:141] libmachine: (addons-962955) Calling .GetState
	I0717 18:43:55.849692 1069276 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 18:43:55.849706 1069276 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 18:43:55.849714 1069276 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 18:43:55.849720 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHHostname
	I0717 18:43:55.852388 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:55.852753 1069276 main.go:141] libmachine: (addons-962955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:53:85", ip: ""} in network mk-addons-962955: {Iface:virbr1 ExpiryTime:2023-07-17 19:43:48 +0000 UTC Type:0 Mac:52:54:00:e9:53:85 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-962955 Clientid:01:52:54:00:e9:53:85}
	I0717 18:43:55.852791 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined IP address 192.168.39.215 and MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:55.852928 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHPort
	I0717 18:43:55.853141 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHKeyPath
	I0717 18:43:55.853325 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHKeyPath
	I0717 18:43:55.853454 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHUsername
	I0717 18:43:55.853696 1069276 main.go:141] libmachine: Using SSH client type: native
	I0717 18:43:55.854345 1069276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0717 18:43:55.854364 1069276 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 18:43:55.968978 1069276 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:43:55.969015 1069276 main.go:141] libmachine: Detecting the provisioner...
	I0717 18:43:55.969028 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHHostname
	I0717 18:43:55.972433 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:55.972888 1069276 main.go:141] libmachine: (addons-962955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:53:85", ip: ""} in network mk-addons-962955: {Iface:virbr1 ExpiryTime:2023-07-17 19:43:48 +0000 UTC Type:0 Mac:52:54:00:e9:53:85 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-962955 Clientid:01:52:54:00:e9:53:85}
	I0717 18:43:55.972926 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined IP address 192.168.39.215 and MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:55.973106 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHPort
	I0717 18:43:55.973335 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHKeyPath
	I0717 18:43:55.973506 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHKeyPath
	I0717 18:43:55.973671 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHUsername
	I0717 18:43:55.973863 1069276 main.go:141] libmachine: Using SSH client type: native
	I0717 18:43:55.974273 1069276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0717 18:43:55.974285 1069276 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 18:43:56.091160 1069276 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gf5d52c7-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0717 18:43:56.091268 1069276 main.go:141] libmachine: found compatible host: buildroot
	I0717 18:43:56.091285 1069276 main.go:141] libmachine: Provisioning with buildroot...
	I0717 18:43:56.091299 1069276 main.go:141] libmachine: (addons-962955) Calling .GetMachineName
	I0717 18:43:56.091603 1069276 buildroot.go:166] provisioning hostname "addons-962955"
	I0717 18:43:56.091636 1069276 main.go:141] libmachine: (addons-962955) Calling .GetMachineName
	I0717 18:43:56.091850 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHHostname
	I0717 18:43:56.094779 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:56.095187 1069276 main.go:141] libmachine: (addons-962955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:53:85", ip: ""} in network mk-addons-962955: {Iface:virbr1 ExpiryTime:2023-07-17 19:43:48 +0000 UTC Type:0 Mac:52:54:00:e9:53:85 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-962955 Clientid:01:52:54:00:e9:53:85}
	I0717 18:43:56.095230 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined IP address 192.168.39.215 and MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:56.095392 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHPort
	I0717 18:43:56.095611 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHKeyPath
	I0717 18:43:56.095786 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHKeyPath
	I0717 18:43:56.095944 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHUsername
	I0717 18:43:56.096135 1069276 main.go:141] libmachine: Using SSH client type: native
	I0717 18:43:56.096544 1069276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0717 18:43:56.096557 1069276 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-962955 && echo "addons-962955" | sudo tee /etc/hostname
	I0717 18:43:56.227913 1069276 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-962955
	
	I0717 18:43:56.227949 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHHostname
	I0717 18:43:56.231252 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:56.231743 1069276 main.go:141] libmachine: (addons-962955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:53:85", ip: ""} in network mk-addons-962955: {Iface:virbr1 ExpiryTime:2023-07-17 19:43:48 +0000 UTC Type:0 Mac:52:54:00:e9:53:85 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-962955 Clientid:01:52:54:00:e9:53:85}
	I0717 18:43:56.231774 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined IP address 192.168.39.215 and MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:56.231925 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHPort
	I0717 18:43:56.232185 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHKeyPath
	I0717 18:43:56.232424 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHKeyPath
	I0717 18:43:56.232617 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHUsername
	I0717 18:43:56.232849 1069276 main.go:141] libmachine: Using SSH client type: native
	I0717 18:43:56.233477 1069276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0717 18:43:56.233506 1069276 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-962955' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-962955/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-962955' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:43:56.360057 1069276 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:43:56.360096 1069276 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16890-1061725/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-1061725/.minikube}
	I0717 18:43:56.360120 1069276 buildroot.go:174] setting up certificates
	I0717 18:43:56.360143 1069276 provision.go:83] configureAuth start
	I0717 18:43:56.360153 1069276 main.go:141] libmachine: (addons-962955) Calling .GetMachineName
	I0717 18:43:56.360630 1069276 main.go:141] libmachine: (addons-962955) Calling .GetIP
	I0717 18:43:56.364531 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:56.364973 1069276 main.go:141] libmachine: (addons-962955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:53:85", ip: ""} in network mk-addons-962955: {Iface:virbr1 ExpiryTime:2023-07-17 19:43:48 +0000 UTC Type:0 Mac:52:54:00:e9:53:85 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-962955 Clientid:01:52:54:00:e9:53:85}
	I0717 18:43:56.365026 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined IP address 192.168.39.215 and MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:56.365238 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHHostname
	I0717 18:43:56.368047 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:56.368525 1069276 main.go:141] libmachine: (addons-962955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:53:85", ip: ""} in network mk-addons-962955: {Iface:virbr1 ExpiryTime:2023-07-17 19:43:48 +0000 UTC Type:0 Mac:52:54:00:e9:53:85 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-962955 Clientid:01:52:54:00:e9:53:85}
	I0717 18:43:56.368578 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined IP address 192.168.39.215 and MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:56.368724 1069276 provision.go:138] copyHostCerts
	I0717 18:43:56.368811 1069276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem (1082 bytes)
	I0717 18:43:56.368962 1069276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem (1123 bytes)
	I0717 18:43:56.369065 1069276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem (1675 bytes)
	I0717 18:43:56.369147 1069276 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem org=jenkins.addons-962955 san=[192.168.39.215 192.168.39.215 localhost 127.0.0.1 minikube addons-962955]
	I0717 18:43:56.914278 1069276 provision.go:172] copyRemoteCerts
	I0717 18:43:56.914367 1069276 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:43:56.914396 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHHostname
	I0717 18:43:56.917638 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:56.917972 1069276 main.go:141] libmachine: (addons-962955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:53:85", ip: ""} in network mk-addons-962955: {Iface:virbr1 ExpiryTime:2023-07-17 19:43:48 +0000 UTC Type:0 Mac:52:54:00:e9:53:85 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-962955 Clientid:01:52:54:00:e9:53:85}
	I0717 18:43:56.918004 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined IP address 192.168.39.215 and MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:56.918225 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHPort
	I0717 18:43:56.918479 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHKeyPath
	I0717 18:43:56.918788 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHUsername
	I0717 18:43:56.918964 1069276 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/addons-962955/id_rsa Username:docker}
	I0717 18:43:57.008291 1069276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 18:43:57.034277 1069276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0717 18:43:57.060279 1069276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 18:43:57.086521 1069276 provision.go:86] duration metric: configureAuth took 726.361842ms
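Editor's note: configureAuth above generates a server certificate whose SANs cover the VM IP, localhost and the machine name. A minimal, self-contained sketch of that idea follows; it creates a self-signed certificate (the real flow signs with the CA in .minikube/certs), and the output file names and validity period are assumptions made for the example.

    // server_cert_sketch.go - self-signed cert with the SANs seen in the log above.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-962955"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("192.168.39.215"), net.ParseIP("127.0.0.1")},
            DNSNames:     []string{"localhost", "minikube", "addons-962955"},
        }
        // Self-signed: template doubles as the parent certificate.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        certOut, err := os.Create("server.pem")
        if err != nil {
            panic(err)
        }
        pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})
        certOut.Close()
        keyOut, err := os.Create("server-key.pem")
        if err != nil {
            panic(err)
        }
        pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
        keyOut.Close()
    }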
	I0717 18:43:57.086556 1069276 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:43:57.086787 1069276 config.go:182] Loaded profile config "addons-962955": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 18:43:57.086889 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHHostname
	I0717 18:43:57.089799 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:57.090240 1069276 main.go:141] libmachine: (addons-962955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:53:85", ip: ""} in network mk-addons-962955: {Iface:virbr1 ExpiryTime:2023-07-17 19:43:48 +0000 UTC Type:0 Mac:52:54:00:e9:53:85 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-962955 Clientid:01:52:54:00:e9:53:85}
	I0717 18:43:57.090282 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined IP address 192.168.39.215 and MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:57.090550 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHPort
	I0717 18:43:57.090837 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHKeyPath
	I0717 18:43:57.091034 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHKeyPath
	I0717 18:43:57.091200 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHUsername
	I0717 18:43:57.091508 1069276 main.go:141] libmachine: Using SSH client type: native
	I0717 18:43:57.091906 1069276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0717 18:43:57.091925 1069276 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:43:57.433343 1069276 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:43:57.433374 1069276 main.go:141] libmachine: Checking connection to Docker...
	I0717 18:43:57.433397 1069276 main.go:141] libmachine: (addons-962955) Calling .GetURL
	I0717 18:43:57.435025 1069276 main.go:141] libmachine: (addons-962955) DBG | Using libvirt version 6000000
	I0717 18:43:57.437014 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:57.437703 1069276 main.go:141] libmachine: (addons-962955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:53:85", ip: ""} in network mk-addons-962955: {Iface:virbr1 ExpiryTime:2023-07-17 19:43:48 +0000 UTC Type:0 Mac:52:54:00:e9:53:85 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-962955 Clientid:01:52:54:00:e9:53:85}
	I0717 18:43:57.437741 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined IP address 192.168.39.215 and MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:57.437970 1069276 main.go:141] libmachine: Docker is up and running!
	I0717 18:43:57.437994 1069276 main.go:141] libmachine: Reticulating splines...
	I0717 18:43:57.438004 1069276 client.go:171] LocalClient.Create took 25.502435608s
	I0717 18:43:57.438040 1069276 start.go:167] duration metric: libmachine.API.Create for "addons-962955" took 25.502521363s
	I0717 18:43:57.438050 1069276 start.go:300] post-start starting for "addons-962955" (driver="kvm2")
	I0717 18:43:57.438063 1069276 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:43:57.438090 1069276 main.go:141] libmachine: (addons-962955) Calling .DriverName
	I0717 18:43:57.438438 1069276 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:43:57.438472 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHHostname
	I0717 18:43:57.441177 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:57.441638 1069276 main.go:141] libmachine: (addons-962955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:53:85", ip: ""} in network mk-addons-962955: {Iface:virbr1 ExpiryTime:2023-07-17 19:43:48 +0000 UTC Type:0 Mac:52:54:00:e9:53:85 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-962955 Clientid:01:52:54:00:e9:53:85}
	I0717 18:43:57.441674 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined IP address 192.168.39.215 and MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:57.441873 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHPort
	I0717 18:43:57.442171 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHKeyPath
	I0717 18:43:57.442355 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHUsername
	I0717 18:43:57.442553 1069276 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/addons-962955/id_rsa Username:docker}
	I0717 18:43:57.533034 1069276 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:43:57.538126 1069276 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 18:43:57.538168 1069276 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/addons for local assets ...
	I0717 18:43:57.538275 1069276 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/files for local assets ...
	I0717 18:43:57.538324 1069276 start.go:303] post-start completed in 100.264699ms
	I0717 18:43:57.538375 1069276 main.go:141] libmachine: (addons-962955) Calling .GetConfigRaw
	I0717 18:43:57.539087 1069276 main.go:141] libmachine: (addons-962955) Calling .GetIP
	I0717 18:43:57.542223 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:57.542649 1069276 main.go:141] libmachine: (addons-962955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:53:85", ip: ""} in network mk-addons-962955: {Iface:virbr1 ExpiryTime:2023-07-17 19:43:48 +0000 UTC Type:0 Mac:52:54:00:e9:53:85 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-962955 Clientid:01:52:54:00:e9:53:85}
	I0717 18:43:57.542690 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined IP address 192.168.39.215 and MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:57.543019 1069276 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/config.json ...
	I0717 18:43:57.543218 1069276 start.go:128] duration metric: createHost completed in 25.627803603s
	I0717 18:43:57.543245 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHHostname
	I0717 18:43:57.547272 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:57.547854 1069276 main.go:141] libmachine: (addons-962955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:53:85", ip: ""} in network mk-addons-962955: {Iface:virbr1 ExpiryTime:2023-07-17 19:43:48 +0000 UTC Type:0 Mac:52:54:00:e9:53:85 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-962955 Clientid:01:52:54:00:e9:53:85}
	I0717 18:43:57.547894 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined IP address 192.168.39.215 and MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:57.548190 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHPort
	I0717 18:43:57.548466 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHKeyPath
	I0717 18:43:57.548669 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHKeyPath
	I0717 18:43:57.548838 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHUsername
	I0717 18:43:57.549022 1069276 main.go:141] libmachine: Using SSH client type: native
	I0717 18:43:57.549460 1069276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0717 18:43:57.549480 1069276 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 18:43:57.666848 1069276 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689619437.647221584
	
	I0717 18:43:57.666878 1069276 fix.go:206] guest clock: 1689619437.647221584
	I0717 18:43:57.666886 1069276 fix.go:219] Guest: 2023-07-17 18:43:57.647221584 +0000 UTC Remote: 2023-07-17 18:43:57.543231434 +0000 UTC m=+25.746600929 (delta=103.99015ms)
	I0717 18:43:57.666909 1069276 fix.go:190] guest clock delta is within tolerance: 103.99015ms
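Editor's note: the guest-clock check above parses the VM's `date +%s.%N` output and compares it with the host clock. A small sketch of that comparison follows; the one-second tolerance is an assumed value for illustration, not the tolerance minikube uses.

    // clock_delta_sketch.go - parse "seconds.nanoseconds" and compare with local time.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            // Right-pad the fractional part to nine digits before parsing as nanoseconds.
            frac := (parts[1] + "000000000")[:9]
            if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1689619437.647221584") // value taken from the log above
        if err != nil {
            panic(err)
        }
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = time.Second // assumed tolerance for this example
        fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
    }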
	I0717 18:43:57.666915 1069276 start.go:83] releasing machines lock for "addons-962955", held for 25.751614822s
	I0717 18:43:57.666935 1069276 main.go:141] libmachine: (addons-962955) Calling .DriverName
	I0717 18:43:57.667270 1069276 main.go:141] libmachine: (addons-962955) Calling .GetIP
	I0717 18:43:57.670312 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:57.670641 1069276 main.go:141] libmachine: (addons-962955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:53:85", ip: ""} in network mk-addons-962955: {Iface:virbr1 ExpiryTime:2023-07-17 19:43:48 +0000 UTC Type:0 Mac:52:54:00:e9:53:85 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-962955 Clientid:01:52:54:00:e9:53:85}
	I0717 18:43:57.670689 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined IP address 192.168.39.215 and MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:57.670816 1069276 main.go:141] libmachine: (addons-962955) Calling .DriverName
	I0717 18:43:57.671412 1069276 main.go:141] libmachine: (addons-962955) Calling .DriverName
	I0717 18:43:57.671633 1069276 main.go:141] libmachine: (addons-962955) Calling .DriverName
	I0717 18:43:57.671747 1069276 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:43:57.671815 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHHostname
	I0717 18:43:57.671894 1069276 ssh_runner.go:195] Run: cat /version.json
	I0717 18:43:57.671927 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHHostname
	I0717 18:43:57.674642 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:57.674825 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:57.675015 1069276 main.go:141] libmachine: (addons-962955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:53:85", ip: ""} in network mk-addons-962955: {Iface:virbr1 ExpiryTime:2023-07-17 19:43:48 +0000 UTC Type:0 Mac:52:54:00:e9:53:85 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-962955 Clientid:01:52:54:00:e9:53:85}
	I0717 18:43:57.675044 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined IP address 192.168.39.215 and MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:57.675158 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHPort
	I0717 18:43:57.675192 1069276 main.go:141] libmachine: (addons-962955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:53:85", ip: ""} in network mk-addons-962955: {Iface:virbr1 ExpiryTime:2023-07-17 19:43:48 +0000 UTC Type:0 Mac:52:54:00:e9:53:85 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-962955 Clientid:01:52:54:00:e9:53:85}
	I0717 18:43:57.675225 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined IP address 192.168.39.215 and MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:57.675400 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHKeyPath
	I0717 18:43:57.675436 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHPort
	I0717 18:43:57.675602 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHUsername
	I0717 18:43:57.675612 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHKeyPath
	I0717 18:43:57.675783 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHUsername
	I0717 18:43:57.675798 1069276 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/addons-962955/id_rsa Username:docker}
	I0717 18:43:57.675924 1069276 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/addons-962955/id_rsa Username:docker}
	W0717 18:43:57.783171 1069276 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 18:43:57.783269 1069276 ssh_runner.go:195] Run: systemctl --version
	I0717 18:43:57.789604 1069276 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:43:57.967479 1069276 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:43:57.973907 1069276 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:43:57.974002 1069276 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:43:57.991229 1069276 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
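Editor's note: the `find ... -exec mv` above disables conflicting bridge/podman CNI configs by renaming them with a ".mk_disabled" suffix. The Go sketch below does the same thing against a local directory; running it locally rather than over SSH, and the simple substring matching, are assumptions of this example.

    // disable_cni_sketch.go - rename bridge/podman configs in /etc/cni/net.d.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func disableConflictingCNI(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var disabled []string
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    return disabled, err
                }
                disabled = append(disabled, src)
            }
        }
        return disabled, nil
    }

    func main() {
        disabled, err := disableConflictingCNI("/etc/cni/net.d")
        if err != nil {
            fmt.Println("error:", err)
        }
        fmt.Println("disabled cni config(s):", disabled)
    }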
	I0717 18:43:57.991275 1069276 start.go:469] detecting cgroup driver to use...
	I0717 18:43:57.991369 1069276 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:43:58.007199 1069276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:43:58.022602 1069276 docker.go:196] disabling cri-docker service (if available) ...
	I0717 18:43:58.022670 1069276 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:43:58.037985 1069276 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:43:58.055534 1069276 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:43:58.177944 1069276 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:43:58.307019 1069276 docker.go:212] disabling docker service ...
	I0717 18:43:58.307119 1069276 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:43:58.322468 1069276 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:43:58.336038 1069276 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:43:58.453229 1069276 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:43:58.568259 1069276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:43:58.581472 1069276 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:43:58.599687 1069276 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 18:43:58.599772 1069276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:43:58.610029 1069276 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:43:58.610108 1069276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:43:58.620553 1069276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:43:58.630921 1069276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:43:58.641248 1069276 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:43:58.651978 1069276 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:43:58.661198 1069276 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:43:58.661276 1069276 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:43:58.675627 1069276 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 18:43:58.685472 1069276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:43:58.806830 1069276 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:43:58.992906 1069276 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:43:58.993021 1069276 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
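Editor's note: "Will wait 60s for socket path" is a polling loop that stats the CRI-O socket until it appears or the deadline passes. A sketch of that loop, with an assumed 500ms poll interval:

    // wait_socket_sketch.go
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil // socket exists, runtime is accepting connections soon after
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Println(err)
            os.Exit(1)
        }
        fmt.Println("crio socket is ready")
    }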
	I0717 18:43:58.999616 1069276 start.go:537] Will wait 60s for crictl version
	I0717 18:43:58.999705 1069276 ssh_runner.go:195] Run: which crictl
	I0717 18:43:59.004047 1069276 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:43:59.037622 1069276 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 18:43:59.037723 1069276 ssh_runner.go:195] Run: crio --version
	I0717 18:43:59.088324 1069276 ssh_runner.go:195] Run: crio --version
	I0717 18:43:59.144779 1069276 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0717 18:43:59.146762 1069276 main.go:141] libmachine: (addons-962955) Calling .GetIP
	I0717 18:43:59.150474 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:59.150839 1069276 main.go:141] libmachine: (addons-962955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:53:85", ip: ""} in network mk-addons-962955: {Iface:virbr1 ExpiryTime:2023-07-17 19:43:48 +0000 UTC Type:0 Mac:52:54:00:e9:53:85 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-962955 Clientid:01:52:54:00:e9:53:85}
	I0717 18:43:59.150871 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined IP address 192.168.39.215 and MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:43:59.151180 1069276 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 18:43:59.156108 1069276 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:43:59.170395 1069276 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 18:43:59.170460 1069276 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:43:59.200915 1069276 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0717 18:43:59.200998 1069276 ssh_runner.go:195] Run: which lz4
	I0717 18:43:59.205720 1069276 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 18:43:59.210668 1069276 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 18:43:59.210715 1069276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0717 18:44:01.007643 1069276 crio.go:444] Took 1.801966 seconds to copy over tarball
	I0717 18:44:01.007741 1069276 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 18:44:04.347603 1069276 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.339826045s)
	I0717 18:44:04.347638 1069276 crio.go:451] Took 3.339957 seconds to extract the tarball
	I0717 18:44:04.347653 1069276 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 18:44:04.390553 1069276 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:44:04.462431 1069276 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 18:44:04.462463 1069276 cache_images.go:84] Images are preloaded, skipping loading
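Editor's note: the preload path shown above checks for /preloaded.tar.lz4, copies the cached image tarball up if needed, unpacks it into /var with `tar -I lz4`, then deletes it. The sketch below mirrors that sequence locally; the fixed paths and running tar through sudo follow the log but are assumptions of this standalone example.

    // preload_extract_sketch.go
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func extractPreload(tarball, dest string) error {
        if _, err := os.Stat(tarball); err != nil {
            return fmt.Errorf("preload not found, images must be pulled instead: %w", err)
        }
        cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", dest, "-xf", tarball)
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("extracting %s: %v: %s", tarball, err, out)
        }
        // Free the space once the layers are in place, as the log does with rm.
        return os.Remove(tarball)
    }

    func main() {
        if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
            fmt.Println(err)
        }
    }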
	I0717 18:44:04.462538 1069276 ssh_runner.go:195] Run: crio config
	I0717 18:44:04.531534 1069276 cni.go:84] Creating CNI manager for ""
	I0717 18:44:04.531566 1069276 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:44:04.531592 1069276 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 18:44:04.531615 1069276 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.215 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-962955 NodeName:addons-962955 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.215"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.215 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 18:44:04.531767 1069276 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.215
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-962955"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.215
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.215"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:44:04.531843 1069276 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-962955 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:addons-962955 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
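Editor's note: the kubeadm YAML above is rendered from the option struct logged at kubeadm.go:176. A heavily trimmed sketch of that render step follows, using text/template; the field names and the cut-down template are illustrative, and the real generator in minikube's kubeadm package carries many more options.

    // kubeadm_template_sketch.go
    package main

    import (
        "os"
        "text/template"
    )

    const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.AdvertiseAddress}}
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: {{.KubernetesVersion}}
    controlPlaneEndpoint: {{.ControlPlaneEndpoint}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    type kubeadmParams struct {
        AdvertiseAddress     string
        APIServerPort        int
        CRISocket            string
        NodeName             string
        KubernetesVersion    string
        ControlPlaneEndpoint string
        PodSubnet            string
        ServiceSubnet        string
    }

    func main() {
        t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
        p := kubeadmParams{
            AdvertiseAddress:     "192.168.39.215",
            APIServerPort:        8443,
            CRISocket:            "unix:///var/run/crio/crio.sock",
            NodeName:             "addons-962955",
            KubernetesVersion:    "v1.27.3",
            ControlPlaneEndpoint: "control-plane.minikube.internal:8443",
            PodSubnet:            "10.244.0.0/16",
            ServiceSubnet:        "10.96.0.0/12",
        }
        if err := t.Execute(os.Stdout, p); err != nil {
            panic(err)
        }
    }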
	I0717 18:44:04.531903 1069276 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 18:44:04.541890 1069276 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:44:04.541977 1069276 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:44:04.551417 1069276 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I0717 18:44:04.569269 1069276 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:44:04.586525 1069276 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I0717 18:44:04.604351 1069276 ssh_runner.go:195] Run: grep 192.168.39.215	control-plane.minikube.internal$ /etc/hosts
	I0717 18:44:04.609623 1069276 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.215	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:44:04.624140 1069276 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955 for IP: 192.168.39.215
	I0717 18:44:04.624199 1069276 certs.go:190] acquiring lock for shared ca certs: {Name:mkdfbc203d7bd874aff2f1c4e5c3352251804e32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:44:04.624369 1069276 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key
	I0717 18:44:04.679830 1069276 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt ...
	I0717 18:44:04.679867 1069276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt: {Name:mkcf39207ea7fae6392bcef6555df69a57efc86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:44:04.680045 1069276 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key ...
	I0717 18:44:04.680057 1069276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key: {Name:mk19afdb6a92dccffda4f57e5a921ee2679370ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:44:04.680123 1069276 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key
	I0717 18:44:04.872234 1069276 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.crt ...
	I0717 18:44:04.872269 1069276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.crt: {Name:mk7d936a1629737f983bbb225ecfaec9cf331533 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:44:04.872448 1069276 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key ...
	I0717 18:44:04.872462 1069276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key: {Name:mkafa78f15151c3e572e1e1de9fed2669bdf4faa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:44:04.872569 1069276 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.key
	I0717 18:44:04.872583 1069276 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt with IP's: []
	I0717 18:44:04.926777 1069276 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt ...
	I0717 18:44:04.926816 1069276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt: {Name:mk5fee6380d19d9dd04d18e138849774936d6de6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:44:04.926992 1069276 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.key ...
	I0717 18:44:04.927004 1069276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.key: {Name:mk8efe3cd26e19b5d2d82ddd3c364d95828ca23c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:44:04.927066 1069276 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/apiserver.key.c3c8e5aa
	I0717 18:44:04.927083 1069276 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/apiserver.crt.c3c8e5aa with IP's: [192.168.39.215 10.96.0.1 127.0.0.1 10.0.0.1]
	I0717 18:44:05.172418 1069276 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/apiserver.crt.c3c8e5aa ...
	I0717 18:44:05.172458 1069276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/apiserver.crt.c3c8e5aa: {Name:mkd5ae9054ebac5102e284c3993ad2e27152d9e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:44:05.172649 1069276 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/apiserver.key.c3c8e5aa ...
	I0717 18:44:05.172662 1069276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/apiserver.key.c3c8e5aa: {Name:mkcd3dfe380b2ee09beb79198ec01ea42622de6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:44:05.172736 1069276 certs.go:337] copying /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/apiserver.crt.c3c8e5aa -> /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/apiserver.crt
	I0717 18:44:05.172801 1069276 certs.go:341] copying /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/apiserver.key.c3c8e5aa -> /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/apiserver.key
	I0717 18:44:05.172847 1069276 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/proxy-client.key
	I0717 18:44:05.172860 1069276 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/proxy-client.crt with IP's: []
	I0717 18:44:05.440293 1069276 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/proxy-client.crt ...
	I0717 18:44:05.440330 1069276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/proxy-client.crt: {Name:mkaddbe73f0aaf0d4b86a2ea01077195c7497159 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:44:05.440513 1069276 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/proxy-client.key ...
	I0717 18:44:05.440534 1069276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/proxy-client.key: {Name:mk00431fd421d2a44315caa6d3ef16e298e3507e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:44:05.440707 1069276 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 18:44:05.440755 1069276 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem (1082 bytes)
	I0717 18:44:05.440777 1069276 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:44:05.440804 1069276 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem (1675 bytes)
	I0717 18:44:05.441513 1069276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 18:44:05.468541 1069276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 18:44:05.494650 1069276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:44:05.520033 1069276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 18:44:05.546264 1069276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:44:05.573305 1069276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 18:44:05.598845 1069276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:44:05.623870 1069276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 18:44:05.650054 1069276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:44:05.675948 1069276 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:44:05.695527 1069276 ssh_runner.go:195] Run: openssl version
	I0717 18:44:05.701667 1069276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:44:05.712884 1069276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:44:05.718807 1069276 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:44:05.718878 1069276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:44:05.724973 1069276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:44:05.736236 1069276 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 18:44:05.741105 1069276 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 18:44:05.741170 1069276 kubeadm.go:404] StartCluster: {Name:addons-962955 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-962955 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 18:44:05.741262 1069276 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:44:05.741315 1069276 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:44:05.780525 1069276 cri.go:89] found id: ""
	I0717 18:44:05.780607 1069276 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 18:44:05.790903 1069276 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:44:05.800812 1069276 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:44:05.810655 1069276 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:44:05.810727 1069276 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:44:06.014452 1069276 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:44:19.716013 1069276 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0717 18:44:19.716109 1069276 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 18:44:19.716249 1069276 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:44:19.716404 1069276 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:44:19.716535 1069276 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:44:19.716625 1069276 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:44:19.719185 1069276 out.go:204]   - Generating certificates and keys ...
	I0717 18:44:19.719305 1069276 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 18:44:19.719382 1069276 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 18:44:19.719459 1069276 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 18:44:19.719528 1069276 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0717 18:44:19.719575 1069276 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0717 18:44:19.719645 1069276 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0717 18:44:19.719693 1069276 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0717 18:44:19.719790 1069276 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-962955 localhost] and IPs [192.168.39.215 127.0.0.1 ::1]
	I0717 18:44:19.719831 1069276 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0717 18:44:19.719984 1069276 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-962955 localhost] and IPs [192.168.39.215 127.0.0.1 ::1]
	I0717 18:44:19.720044 1069276 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 18:44:19.720099 1069276 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 18:44:19.720138 1069276 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0717 18:44:19.720185 1069276 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:44:19.720230 1069276 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:44:19.720275 1069276 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:44:19.720332 1069276 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:44:19.720394 1069276 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:44:19.720497 1069276 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:44:19.720614 1069276 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:44:19.720675 1069276 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 18:44:19.720770 1069276 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:44:19.723733 1069276 out.go:204]   - Booting up control plane ...
	I0717 18:44:19.723832 1069276 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:44:19.723927 1069276 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:44:19.724018 1069276 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:44:19.724107 1069276 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:44:19.724282 1069276 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 18:44:19.724378 1069276 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.006045 seconds
	I0717 18:44:19.724513 1069276 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 18:44:19.724667 1069276 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 18:44:19.724744 1069276 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 18:44:19.724984 1069276 kubeadm.go:322] [mark-control-plane] Marking the node addons-962955 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 18:44:19.725052 1069276 kubeadm.go:322] [bootstrap-token] Using token: 9qmjrn.d0mlbc3mgxpuyc6y
	I0717 18:44:19.726925 1069276 out.go:204]   - Configuring RBAC rules ...
	I0717 18:44:19.727066 1069276 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 18:44:19.727173 1069276 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 18:44:19.727370 1069276 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 18:44:19.727518 1069276 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 18:44:19.727656 1069276 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 18:44:19.727782 1069276 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 18:44:19.727927 1069276 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 18:44:19.727990 1069276 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 18:44:19.728053 1069276 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 18:44:19.728061 1069276 kubeadm.go:322] 
	I0717 18:44:19.728132 1069276 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 18:44:19.728142 1069276 kubeadm.go:322] 
	I0717 18:44:19.728245 1069276 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 18:44:19.728258 1069276 kubeadm.go:322] 
	I0717 18:44:19.728286 1069276 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 18:44:19.728349 1069276 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 18:44:19.728403 1069276 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 18:44:19.728413 1069276 kubeadm.go:322] 
	I0717 18:44:19.728466 1069276 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0717 18:44:19.728476 1069276 kubeadm.go:322] 
	I0717 18:44:19.728544 1069276 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 18:44:19.728559 1069276 kubeadm.go:322] 
	I0717 18:44:19.728616 1069276 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 18:44:19.728750 1069276 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 18:44:19.728831 1069276 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 18:44:19.728837 1069276 kubeadm.go:322] 
	I0717 18:44:19.728931 1069276 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 18:44:19.729038 1069276 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 18:44:19.729050 1069276 kubeadm.go:322] 
	I0717 18:44:19.729148 1069276 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 9qmjrn.d0mlbc3mgxpuyc6y \
	I0717 18:44:19.729279 1069276 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a35378d49f60cb3201ada96dd7ea3b95e7cce0b7f53bd002f18d31a55869c8fc \
	I0717 18:44:19.729301 1069276 kubeadm.go:322] 	--control-plane 
	I0717 18:44:19.729308 1069276 kubeadm.go:322] 
	I0717 18:44:19.729394 1069276 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 18:44:19.729401 1069276 kubeadm.go:322] 
	I0717 18:44:19.729466 1069276 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 9qmjrn.d0mlbc3mgxpuyc6y \
	I0717 18:44:19.729594 1069276 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a35378d49f60cb3201ada96dd7ea3b95e7cce0b7f53bd002f18d31a55869c8fc 
	I0717 18:44:19.729617 1069276 cni.go:84] Creating CNI manager for ""
	I0717 18:44:19.729635 1069276 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:44:19.733069 1069276 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:44:19.734962 1069276 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:44:19.749280 1069276 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
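Editor's note: "Configuring bridge CNI" writes a conflist into /etc/cni/net.d so kubelet can wire pod networking on the 10.244.0.0/16 pod CIDR. The sketch below writes a generic, minimal bridge configuration of the same shape; it is not the exact 457-byte file minikube ships, and the file path reuse is for illustration only.

    // bridge_cni_sketch.go - requires root to write under /etc/cni/net.d.
    package main

    import (
        "fmt"
        "os"
    )

    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            fmt.Println(err)
            return
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("wrote bridge CNI config")
    }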
	I0717 18:44:19.828934 1069276 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 18:44:19.829072 1069276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:44:19.829092 1069276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5 minikube.k8s.io/name=addons-962955 minikube.k8s.io/updated_at=2023_07_17T18_44_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:44:19.848470 1069276 ops.go:34] apiserver oom_adj: -16
	I0717 18:44:20.113107 1069276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:44:20.728431 1069276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:44:21.228018 1069276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:44:21.728009 1069276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:44:22.228473 1069276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:44:22.728383 1069276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:44:23.228313 1069276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:44:23.728135 1069276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:44:24.228319 1069276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:44:24.728392 1069276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:44:25.228331 1069276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:44:25.727948 1069276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:44:26.228427 1069276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:44:26.727841 1069276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:44:27.228557 1069276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:44:27.728516 1069276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:44:28.228593 1069276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:44:28.728412 1069276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:44:29.228627 1069276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:44:29.727804 1069276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:44:30.228743 1069276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:44:30.727943 1069276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:44:31.228497 1069276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:44:31.400261 1069276 kubeadm.go:1081] duration metric: took 11.571269668s to wait for elevateKubeSystemPrivileges.
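The burst of "kubectl get sa default" calls between 18:44:20 and 18:44:31 is a readiness poll: minikube retries roughly every 500ms until the default service account exists, then records the total wait as elevateKubeSystemPrivileges. A minimal shell equivalent, assuming the same binary and kubeconfig paths used in the log:

    #!/usr/bin/env bash
    # Poll until the "default" service account is available, mirroring the
    # retry loop above; paths are taken from the log and may differ elsewhere.
    KUBECTL=/var/lib/minikube/binaries/v1.27.3/kubectl
    KCFG=/var/lib/minikube/kubeconfig

    until sudo "$KUBECTL" get sa default --kubeconfig="$KCFG" >/dev/null 2>&1; do
      sleep 0.5
    done
    echo "default service account is ready"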
	I0717 18:44:31.400314 1069276 kubeadm.go:406] StartCluster complete in 25.659148984s
	I0717 18:44:31.400341 1069276 settings.go:142] acquiring lock: {Name:mk3323296c186763db6ebb5a1c5ae94a6b1a6242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:44:31.400525 1069276 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 18:44:31.401235 1069276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/kubeconfig: {Name:mk7f4fe48169c87be980c7edd9dbe55d4ea8b9ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:44:31.401503 1069276 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 18:44:31.401590 1069276 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0717 18:44:31.401717 1069276 addons.go:69] Setting volumesnapshots=true in profile "addons-962955"
	I0717 18:44:31.401724 1069276 addons.go:69] Setting ingress=true in profile "addons-962955"
	I0717 18:44:31.401743 1069276 addons.go:231] Setting addon volumesnapshots=true in "addons-962955"
	I0717 18:44:31.401743 1069276 addons.go:69] Setting metrics-server=true in profile "addons-962955"
	I0717 18:44:31.401757 1069276 addons.go:69] Setting cloud-spanner=true in profile "addons-962955"
	I0717 18:44:31.401766 1069276 addons.go:231] Setting addon metrics-server=true in "addons-962955"
	I0717 18:44:31.401772 1069276 addons.go:231] Setting addon cloud-spanner=true in "addons-962955"
	I0717 18:44:31.401767 1069276 addons.go:69] Setting ingress-dns=true in profile "addons-962955"
	I0717 18:44:31.401790 1069276 addons.go:231] Setting addon ingress-dns=true in "addons-962955"
	I0717 18:44:31.401800 1069276 host.go:66] Checking if "addons-962955" exists ...
	I0717 18:44:31.401798 1069276 addons.go:69] Setting default-storageclass=true in profile "addons-962955"
	I0717 18:44:31.401818 1069276 host.go:66] Checking if "addons-962955" exists ...
	I0717 18:44:31.401824 1069276 addons.go:69] Setting gcp-auth=true in profile "addons-962955"
	I0717 18:44:31.401836 1069276 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-962955"
	I0717 18:44:31.401837 1069276 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-962955"
	I0717 18:44:31.401856 1069276 host.go:66] Checking if "addons-962955" exists ...
	I0717 18:44:31.401867 1069276 mustload.go:65] Loading cluster: addons-962955
	I0717 18:44:31.401860 1069276 config.go:182] Loaded profile config "addons-962955": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 18:44:31.401917 1069276 addons.go:69] Setting helm-tiller=true in profile "addons-962955"
	I0717 18:44:31.401920 1069276 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-962955"
	I0717 18:44:31.401927 1069276 addons.go:231] Setting addon helm-tiller=true in "addons-962955"
	I0717 18:44:31.401965 1069276 host.go:66] Checking if "addons-962955" exists ...
	I0717 18:44:31.401971 1069276 addons.go:69] Setting inspektor-gadget=true in profile "addons-962955"
	I0717 18:44:31.401976 1069276 host.go:66] Checking if "addons-962955" exists ...
	I0717 18:44:31.401985 1069276 addons.go:231] Setting addon inspektor-gadget=true in "addons-962955"
	I0717 18:44:31.402017 1069276 host.go:66] Checking if "addons-962955" exists ...
	I0717 18:44:31.402101 1069276 config.go:182] Loaded profile config "addons-962955": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 18:44:31.402329 1069276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:44:31.402352 1069276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:44:31.402358 1069276 addons.go:69] Setting storage-provisioner=true in profile "addons-962955"
	I0717 18:44:31.402373 1069276 addons.go:231] Setting addon storage-provisioner=true in "addons-962955"
	I0717 18:44:31.402377 1069276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:44:31.402378 1069276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:44:31.402393 1069276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:44:31.401750 1069276 addons.go:231] Setting addon ingress=true in "addons-962955"
	I0717 18:44:31.402406 1069276 host.go:66] Checking if "addons-962955" exists ...
	I0717 18:44:31.402414 1069276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:44:31.401818 1069276 host.go:66] Checking if "addons-962955" exists ...
	I0717 18:44:31.402426 1069276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:44:31.402436 1069276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:44:31.402486 1069276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:44:31.402506 1069276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:44:31.402369 1069276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:44:31.402349 1069276 addons.go:69] Setting registry=true in profile "addons-962955"
	I0717 18:44:31.402555 1069276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:44:31.402554 1069276 addons.go:231] Setting addon registry=true in "addons-962955"
	I0717 18:44:31.402590 1069276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:44:31.402628 1069276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:44:31.402642 1069276 host.go:66] Checking if "addons-962955" exists ...
	I0717 18:44:31.402770 1069276 host.go:66] Checking if "addons-962955" exists ...
	I0717 18:44:31.402830 1069276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:44:31.402850 1069276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:44:31.402909 1069276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:44:31.402922 1069276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:44:31.402946 1069276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:44:31.402978 1069276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:44:31.403002 1069276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:44:31.403025 1069276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:44:31.403129 1069276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:44:31.403151 1069276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:44:31.418880 1069276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40951
	I0717 18:44:31.419055 1069276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44663
	I0717 18:44:31.426501 1069276 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:44:31.426676 1069276 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:44:31.427185 1069276 main.go:141] libmachine: Using API Version  1
	I0717 18:44:31.427217 1069276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:44:31.427279 1069276 main.go:141] libmachine: Using API Version  1
	I0717 18:44:31.427308 1069276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:44:31.427799 1069276 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:44:31.427802 1069276 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:44:31.428017 1069276 main.go:141] libmachine: (addons-962955) Calling .GetState
	I0717 18:44:31.428458 1069276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:44:31.428516 1069276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:44:31.445123 1069276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36303
	I0717 18:44:31.445641 1069276 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:44:31.448773 1069276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38923
	I0717 18:44:31.449646 1069276 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:44:31.450425 1069276 main.go:141] libmachine: Using API Version  1
	I0717 18:44:31.450451 1069276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:44:31.450907 1069276 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:44:31.451165 1069276 main.go:141] libmachine: Using API Version  1
	I0717 18:44:31.451186 1069276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:44:31.451586 1069276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:44:31.451641 1069276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:44:31.451699 1069276 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:44:31.452344 1069276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:44:31.452390 1069276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:44:31.455306 1069276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42995
	I0717 18:44:31.455457 1069276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46803
	I0717 18:44:31.455885 1069276 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:44:31.456585 1069276 main.go:141] libmachine: Using API Version  1
	I0717 18:44:31.456605 1069276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:44:31.457901 1069276 addons.go:231] Setting addon default-storageclass=true in "addons-962955"
	I0717 18:44:31.457947 1069276 host.go:66] Checking if "addons-962955" exists ...
	I0717 18:44:31.458337 1069276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:44:31.458373 1069276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:44:31.458990 1069276 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:44:31.459581 1069276 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:44:31.460091 1069276 main.go:141] libmachine: Using API Version  1
	I0717 18:44:31.460117 1069276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:44:31.460347 1069276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46583
	I0717 18:44:31.460522 1069276 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:44:31.461191 1069276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:44:31.461240 1069276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:44:31.461759 1069276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:44:31.461793 1069276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:44:31.461934 1069276 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:44:31.462485 1069276 main.go:141] libmachine: Using API Version  1
	I0717 18:44:31.462502 1069276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:44:31.462936 1069276 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:44:31.463470 1069276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:44:31.463507 1069276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:44:31.470257 1069276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45271
	I0717 18:44:31.470915 1069276 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:44:31.471684 1069276 main.go:141] libmachine: Using API Version  1
	I0717 18:44:31.471718 1069276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:44:31.472242 1069276 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:44:31.472454 1069276 main.go:141] libmachine: (addons-962955) Calling .GetState
	I0717 18:44:31.473403 1069276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33755
	I0717 18:44:31.473856 1069276 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:44:31.474384 1069276 host.go:66] Checking if "addons-962955" exists ...
	I0717 18:44:31.474815 1069276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:44:31.474857 1069276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:44:31.475115 1069276 main.go:141] libmachine: Using API Version  1
	I0717 18:44:31.475128 1069276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:44:31.475577 1069276 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:44:31.476186 1069276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:44:31.476233 1069276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:44:31.477930 1069276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37141
	I0717 18:44:31.478501 1069276 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:44:31.479018 1069276 main.go:141] libmachine: Using API Version  1
	I0717 18:44:31.479037 1069276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:44:31.479442 1069276 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:44:31.479973 1069276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:44:31.480013 1069276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:44:31.484050 1069276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39517
	I0717 18:44:31.485284 1069276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33091
	I0717 18:44:31.485803 1069276 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:44:31.486012 1069276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41743
	I0717 18:44:31.486461 1069276 main.go:141] libmachine: Using API Version  1
	I0717 18:44:31.486484 1069276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:44:31.486524 1069276 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:44:31.486935 1069276 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:44:31.487032 1069276 main.go:141] libmachine: Using API Version  1
	I0717 18:44:31.487049 1069276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:44:31.487387 1069276 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:44:31.487451 1069276 main.go:141] libmachine: (addons-962955) Calling .GetState
	I0717 18:44:31.488108 1069276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:44:31.488160 1069276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:44:31.488308 1069276 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:44:31.489325 1069276 main.go:141] libmachine: Using API Version  1
	I0717 18:44:31.489344 1069276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:44:31.489874 1069276 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:44:31.489948 1069276 main.go:141] libmachine: (addons-962955) Calling .DriverName
	I0717 18:44:31.492816 1069276 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0717 18:44:31.490502 1069276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:44:31.493171 1069276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36693
	I0717 18:44:31.497192 1069276 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0717 18:44:31.495155 1069276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:44:31.495720 1069276 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:44:31.496570 1069276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43093
	I0717 18:44:31.497527 1069276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33285
	I0717 18:44:31.502484 1069276 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0717 18:44:31.499716 1069276 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:44:31.500131 1069276 main.go:141] libmachine: Using API Version  1
	I0717 18:44:31.500176 1069276 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:44:31.501202 1069276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37995
	I0717 18:44:31.506558 1069276 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0717 18:44:31.504736 1069276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:44:31.505510 1069276 main.go:141] libmachine: Using API Version  1
	I0717 18:44:31.505603 1069276 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:44:31.505730 1069276 main.go:141] libmachine: Using API Version  1
	I0717 18:44:31.508466 1069276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:44:31.510253 1069276 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0717 18:44:31.508586 1069276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:44:31.509063 1069276 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:44:31.509128 1069276 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:44:31.509713 1069276 main.go:141] libmachine: Using API Version  1
	I0717 18:44:31.511948 1069276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:44:31.513778 1069276 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0717 18:44:31.512296 1069276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44573
	I0717 18:44:31.512323 1069276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42859
	I0717 18:44:31.512727 1069276 main.go:141] libmachine: (addons-962955) Calling .GetState
	I0717 18:44:31.512743 1069276 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:44:31.513190 1069276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:44:31.513519 1069276 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:44:31.514593 1069276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46415
	I0717 18:44:31.518036 1069276 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0717 18:44:31.515722 1069276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:44:31.515946 1069276 main.go:141] libmachine: (addons-962955) Calling .GetState
	I0717 18:44:31.515954 1069276 main.go:141] libmachine: (addons-962955) Calling .GetState
	I0717 18:44:31.516250 1069276 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:44:31.516430 1069276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37779
	I0717 18:44:31.516794 1069276 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:44:31.517971 1069276 main.go:141] libmachine: (addons-962955) Calling .DriverName
	I0717 18:44:31.517997 1069276 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:44:31.521728 1069276 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0717 18:44:31.520495 1069276 main.go:141] libmachine: Using API Version  1
	I0717 18:44:31.520717 1069276 main.go:141] libmachine: Using API Version  1
	I0717 18:44:31.521890 1069276 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:44:31.522726 1069276 main.go:141] libmachine: Using API Version  1
	I0717 18:44:31.523296 1069276 main.go:141] libmachine: (addons-962955) Calling .DriverName
	I0717 18:44:31.523479 1069276 main.go:141] libmachine: (addons-962955) Calling .DriverName
	I0717 18:44:31.523494 1069276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:44:31.523524 1069276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:44:31.523566 1069276 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0717 18:44:31.524074 1069276 main.go:141] libmachine: Using API Version  1
	I0717 18:44:31.525134 1069276 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.3
	I0717 18:44:31.526709 1069276 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 18:44:31.526734 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 18:44:31.526757 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHHostname
	I0717 18:44:31.525157 1069276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:44:31.525206 1069276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:44:31.525231 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0717 18:44:31.526943 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHHostname
	I0717 18:44:31.528747 1069276 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0717 18:44:31.525637 1069276 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:44:31.525725 1069276 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:44:31.527263 1069276 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:44:31.527362 1069276 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:44:31.530316 1069276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39163
	I0717 18:44:31.530613 1069276 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 18:44:31.530761 1069276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44123
	I0717 18:44:31.530899 1069276 main.go:141] libmachine: (addons-962955) Calling .GetState
	I0717 18:44:31.530935 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:44:31.530951 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:44:31.530973 1069276 main.go:141] libmachine: (addons-962955) Calling .DriverName
	I0717 18:44:31.531571 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHPort
	I0717 18:44:31.531791 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHPort
	I0717 18:44:31.532167 1069276 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.18.1
	I0717 18:44:31.534029 1069276 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:44:31.534073 1069276 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0717 18:44:31.534089 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0717 18:44:31.534114 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHHostname
	I0717 18:44:31.532270 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0717 18:44:31.534145 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHHostname
	I0717 18:44:31.532329 1069276 main.go:141] libmachine: (addons-962955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:53:85", ip: ""} in network mk-addons-962955: {Iface:virbr1 ExpiryTime:2023-07-17 19:43:48 +0000 UTC Type:0 Mac:52:54:00:e9:53:85 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-962955 Clientid:01:52:54:00:e9:53:85}
	I0717 18:44:31.532349 1069276 main.go:141] libmachine: (addons-962955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:53:85", ip: ""} in network mk-addons-962955: {Iface:virbr1 ExpiryTime:2023-07-17 19:43:48 +0000 UTC Type:0 Mac:52:54:00:e9:53:85 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-962955 Clientid:01:52:54:00:e9:53:85}
	I0717 18:44:31.532726 1069276 main.go:141] libmachine: (addons-962955) Calling .GetState
	I0717 18:44:31.534200 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined IP address 192.168.39.215 and MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:44:31.532742 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHKeyPath
	I0717 18:44:31.532779 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHKeyPath
	I0717 18:44:31.532805 1069276 main.go:141] libmachine: (addons-962955) Calling .GetState
	I0717 18:44:31.533067 1069276 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:44:31.534183 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined IP address 192.168.39.215 and MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:44:31.535220 1069276 main.go:141] libmachine: Using API Version  1
	I0717 18:44:31.535240 1069276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:44:31.535303 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHUsername
	I0717 18:44:31.535367 1069276 main.go:141] libmachine: Using API Version  1
	I0717 18:44:31.535394 1069276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:44:31.535455 1069276 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/addons-962955/id_rsa Username:docker}
	I0717 18:44:31.536074 1069276 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:44:31.535532 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHUsername
	I0717 18:44:31.536341 1069276 main.go:141] libmachine: (addons-962955) Calling .GetState
	I0717 18:44:31.537166 1069276 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/addons-962955/id_rsa Username:docker}
	I0717 18:44:31.538437 1069276 main.go:141] libmachine: (addons-962955) Calling .DriverName
	I0717 18:44:31.538696 1069276 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:44:31.538938 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:44:31.541064 1069276 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0717 18:44:31.539276 1069276 main.go:141] libmachine: (addons-962955) Calling .GetState
	I0717 18:44:31.539302 1069276 main.go:141] libmachine: (addons-962955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:53:85", ip: ""} in network mk-addons-962955: {Iface:virbr1 ExpiryTime:2023-07-17 19:43:48 +0000 UTC Type:0 Mac:52:54:00:e9:53:85 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-962955 Clientid:01:52:54:00:e9:53:85}
	I0717 18:44:31.539541 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHPort
	I0717 18:44:31.539786 1069276 main.go:141] libmachine: (addons-962955) Calling .DriverName
	I0717 18:44:31.539904 1069276 main.go:141] libmachine: (addons-962955) Calling .DriverName
	I0717 18:44:31.540726 1069276 main.go:141] libmachine: (addons-962955) Calling .DriverName
	I0717 18:44:31.541683 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:44:31.542344 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHPort
	I0717 18:44:31.543046 1069276 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0717 18:44:31.543062 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0717 18:44:31.543083 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHHostname
	I0717 18:44:31.543090 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined IP address 192.168.39.215 and MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:44:31.543048 1069276 main.go:141] libmachine: (addons-962955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:53:85", ip: ""} in network mk-addons-962955: {Iface:virbr1 ExpiryTime:2023-07-17 19:43:48 +0000 UTC Type:0 Mac:52:54:00:e9:53:85 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-962955 Clientid:01:52:54:00:e9:53:85}
	I0717 18:44:31.543114 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined IP address 192.168.39.215 and MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:44:31.543196 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHKeyPath
	I0717 18:44:31.543258 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHKeyPath
	I0717 18:44:31.545310 1069276 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0717 18:44:31.543607 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHUsername
	I0717 18:44:31.543632 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHUsername
	I0717 18:44:31.544392 1069276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45325
	I0717 18:44:31.546087 1069276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46297
	I0717 18:44:31.546826 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:44:31.547896 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHPort
	I0717 18:44:31.548020 1069276 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0717 18:44:31.548236 1069276 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/addons-962955/id_rsa Username:docker}
	I0717 18:44:31.548622 1069276 main.go:141] libmachine: (addons-962955) Calling .DriverName
	I0717 18:44:31.549826 1069276 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:44:31.549885 1069276 main.go:141] libmachine: (addons-962955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:53:85", ip: ""} in network mk-addons-962955: {Iface:virbr1 ExpiryTime:2023-07-17 19:43:48 +0000 UTC Type:0 Mac:52:54:00:e9:53:85 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-962955 Clientid:01:52:54:00:e9:53:85}
	I0717 18:44:31.550193 1069276 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:44:31.550252 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHKeyPath
	I0717 18:44:31.550242 1069276 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/addons-962955/id_rsa Username:docker}
	I0717 18:44:31.550375 1069276 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:44:31.553245 1069276 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:44:31.551433 1069276 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.1
	I0717 18:44:31.551451 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined IP address 192.168.39.215 and MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:44:31.551571 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHUsername
	I0717 18:44:31.552126 1069276 main.go:141] libmachine: Using API Version  1
	I0717 18:44:31.552655 1069276 main.go:141] libmachine: Using API Version  1
	I0717 18:44:31.553330 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 18:44:31.555200 1069276 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0717 18:44:31.555458 1069276 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/addons-962955/id_rsa Username:docker}
	I0717 18:44:31.556850 1069276 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.7
	I0717 18:44:31.556887 1069276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:44:31.556903 1069276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:44:31.556919 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHHostname
	I0717 18:44:31.560432 1069276 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0717 18:44:31.559055 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0717 18:44:31.559914 1069276 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:44:31.559941 1069276 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:44:31.560347 1069276 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0717 18:44:31.562127 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0717 18:44:31.562158 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHHostname
	I0717 18:44:31.562226 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHHostname
	I0717 18:44:31.562403 1069276 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 18:44:31.562427 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0717 18:44:31.562445 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHHostname
	I0717 18:44:31.562508 1069276 main.go:141] libmachine: (addons-962955) Calling .GetState
	I0717 18:44:31.562536 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:44:31.562562 1069276 main.go:141] libmachine: (addons-962955) Calling .GetState
	I0717 18:44:31.564758 1069276 main.go:141] libmachine: (addons-962955) Calling .DriverName
	I0717 18:44:31.564770 1069276 main.go:141] libmachine: (addons-962955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:53:85", ip: ""} in network mk-addons-962955: {Iface:virbr1 ExpiryTime:2023-07-17 19:43:48 +0000 UTC Type:0 Mac:52:54:00:e9:53:85 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-962955 Clientid:01:52:54:00:e9:53:85}
	I0717 18:44:31.564801 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined IP address 192.168.39.215 and MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:44:31.567040 1069276 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0717 18:44:31.565170 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHPort
	I0717 18:44:31.567650 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:44:31.568688 1069276 main.go:141] libmachine: (addons-962955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:53:85", ip: ""} in network mk-addons-962955: {Iface:virbr1 ExpiryTime:2023-07-17 19:43:48 +0000 UTC Type:0 Mac:52:54:00:e9:53:85 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-962955 Clientid:01:52:54:00:e9:53:85}
	I0717 18:44:31.568691 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:44:31.567919 1069276 main.go:141] libmachine: (addons-962955) Calling .DriverName
	I0717 18:44:31.568711 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined IP address 192.168.39.215 and MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:44:31.570340 1069276 out.go:177]   - Using image docker.io/registry:2.8.1
	I0717 18:44:31.568094 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:44:31.568265 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHPort
	I0717 18:44:31.568930 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHKeyPath
	I0717 18:44:31.569058 1069276 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 18:44:31.569168 1069276 main.go:141] libmachine: (addons-962955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:53:85", ip: ""} in network mk-addons-962955: {Iface:virbr1 ExpiryTime:2023-07-17 19:43:48 +0000 UTC Type:0 Mac:52:54:00:e9:53:85 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-962955 Clientid:01:52:54:00:e9:53:85}
	I0717 18:44:31.569464 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHPort
	I0717 18:44:31.569590 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHPort
	I0717 18:44:31.572256 1069276 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I0717 18:44:31.572239 1069276 main.go:141] libmachine: (addons-962955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:53:85", ip: ""} in network mk-addons-962955: {Iface:virbr1 ExpiryTime:2023-07-17 19:43:48 +0000 UTC Type:0 Mac:52:54:00:e9:53:85 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-962955 Clientid:01:52:54:00:e9:53:85}
	I0717 18:44:31.572281 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0717 18:44:31.572307 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHHostname
	I0717 18:44:31.572363 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 18:44:31.572388 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHHostname
	I0717 18:44:31.572389 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined IP address 192.168.39.215 and MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:44:31.572367 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined IP address 192.168.39.215 and MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:44:31.573413 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHUsername
	I0717 18:44:31.573419 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHKeyPath
	I0717 18:44:31.573449 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHKeyPath
	I0717 18:44:31.573669 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHKeyPath
	I0717 18:44:31.573722 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHUsername
	I0717 18:44:31.573783 1069276 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/addons-962955/id_rsa Username:docker}
	I0717 18:44:31.573918 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHUsername
	I0717 18:44:31.574047 1069276 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/addons-962955/id_rsa Username:docker}
	I0717 18:44:31.574177 1069276 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/addons-962955/id_rsa Username:docker}
	I0717 18:44:31.575835 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHUsername
	I0717 18:44:31.575955 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:44:31.576276 1069276 main.go:141] libmachine: (addons-962955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:53:85", ip: ""} in network mk-addons-962955: {Iface:virbr1 ExpiryTime:2023-07-17 19:43:48 +0000 UTC Type:0 Mac:52:54:00:e9:53:85 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-962955 Clientid:01:52:54:00:e9:53:85}
	I0717 18:44:31.576297 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined IP address 192.168.39.215 and MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:44:31.576418 1069276 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/addons-962955/id_rsa Username:docker}
	I0717 18:44:31.576437 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHPort
	I0717 18:44:31.576513 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:44:31.576588 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHKeyPath
	I0717 18:44:31.576685 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHUsername
	I0717 18:44:31.576800 1069276 main.go:141] libmachine: (addons-962955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:53:85", ip: ""} in network mk-addons-962955: {Iface:virbr1 ExpiryTime:2023-07-17 19:43:48 +0000 UTC Type:0 Mac:52:54:00:e9:53:85 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-962955 Clientid:01:52:54:00:e9:53:85}
	I0717 18:44:31.576837 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined IP address 192.168.39.215 and MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:44:31.576818 1069276 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/addons-962955/id_rsa Username:docker}
	I0717 18:44:31.576964 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHPort
	I0717 18:44:31.577150 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHKeyPath
	I0717 18:44:31.577286 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHUsername
	I0717 18:44:31.577407 1069276 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/addons-962955/id_rsa Username:docker}
	I0717 18:44:31.704163 1069276 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
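The pipeline above rewrites the CoreDNS Corefile in place: it inserts a hosts block that resolves host.minikube.internal to the host-side gateway 192.168.39.1 (with fallthrough so other names still reach the remaining plugins), adds a log directive ahead of errors, and feeds the edited ConfigMap back through kubectl replace. A small sketch for inspecting the result afterwards, using the same binary and kubeconfig paths as the log (Corefile is the standard data key of the coredns ConfigMap):

    #!/usr/bin/env bash
    # Print the patched Corefile to confirm the injected hosts { ... } block
    # and the extra "log" directive are present.
    KUBECTL=/var/lib/minikube/binaries/v1.27.3/kubectl
    KCFG=/var/lib/minikube/kubeconfig

    sudo "$KUBECTL" --kubeconfig="$KCFG" -n kube-system \
      get configmap coredns -o jsonpath='{.data.Corefile}'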
	I0717 18:44:31.758449 1069276 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0717 18:44:31.758485 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0717 18:44:31.778705 1069276 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 18:44:31.778732 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0717 18:44:31.785027 1069276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 18:44:31.895606 1069276 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0717 18:44:31.895637 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0717 18:44:31.931613 1069276 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0717 18:44:31.931659 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0717 18:44:31.945403 1069276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 18:44:31.953542 1069276 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I0717 18:44:31.953582 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0717 18:44:31.955344 1069276 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0717 18:44:31.955373 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0717 18:44:31.976104 1069276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 18:44:31.979651 1069276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:44:31.986493 1069276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0717 18:44:32.005183 1069276 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0717 18:44:32.005224 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0717 18:44:32.010357 1069276 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 18:44:32.010389 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 18:44:32.023564 1069276 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0717 18:44:32.023607 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0717 18:44:32.134675 1069276 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0717 18:44:32.134714 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0717 18:44:32.172898 1069276 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0717 18:44:32.172938 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0717 18:44:32.240390 1069276 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0717 18:44:32.240425 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0717 18:44:32.262083 1069276 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0717 18:44:32.262124 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0717 18:44:32.266859 1069276 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0717 18:44:32.266890 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0717 18:44:32.282914 1069276 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:44:32.282942 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 18:44:32.317524 1069276 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0717 18:44:32.317579 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0717 18:44:32.322394 1069276 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0717 18:44:32.322433 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0717 18:44:32.327405 1069276 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-962955" context rescaled to 1 replicas
	I0717 18:44:32.327467 1069276 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:44:32.329859 1069276 out.go:177] * Verifying Kubernetes components...
	I0717 18:44:32.332083 1069276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:44:32.378269 1069276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0717 18:44:32.381737 1069276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0717 18:44:32.398424 1069276 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0717 18:44:32.398468 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0717 18:44:32.414970 1069276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:44:32.453515 1069276 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0717 18:44:32.453551 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0717 18:44:32.455042 1069276 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0717 18:44:32.455080 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0717 18:44:32.549287 1069276 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0717 18:44:32.549319 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0717 18:44:32.579985 1069276 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0717 18:44:32.580021 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0717 18:44:32.621720 1069276 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 18:44:32.621751 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0717 18:44:32.649760 1069276 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0717 18:44:32.649796 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0717 18:44:32.687728 1069276 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0717 18:44:32.687768 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0717 18:44:32.699949 1069276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 18:44:32.718194 1069276 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 18:44:32.718236 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0717 18:44:32.751420 1069276 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0717 18:44:32.751466 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0717 18:44:32.794276 1069276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 18:44:32.855313 1069276 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0717 18:44:32.855344 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0717 18:44:32.901410 1069276 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 18:44:32.901450 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0717 18:44:32.974918 1069276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 18:44:35.393503 1069276 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.689279328s)
	I0717 18:44:35.393582 1069276 start.go:917] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0717 18:44:38.236958 1069276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.451889818s)
	I0717 18:44:38.237021 1069276 main.go:141] libmachine: Making call to close driver server
	I0717 18:44:38.237039 1069276 main.go:141] libmachine: (addons-962955) Calling .Close
	I0717 18:44:38.237072 1069276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.291627869s)
	I0717 18:44:38.237141 1069276 main.go:141] libmachine: Making call to close driver server
	I0717 18:44:38.237161 1069276 main.go:141] libmachine: (addons-962955) Calling .Close
	I0717 18:44:38.237347 1069276 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:44:38.237373 1069276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:44:38.237390 1069276 main.go:141] libmachine: Making call to close driver server
	I0717 18:44:38.237404 1069276 main.go:141] libmachine: (addons-962955) Calling .Close
	I0717 18:44:38.237716 1069276 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:44:38.237735 1069276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:44:38.237856 1069276 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:44:38.237877 1069276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:44:38.237853 1069276 main.go:141] libmachine: (addons-962955) DBG | Closing plugin on server side
	I0717 18:44:38.237887 1069276 main.go:141] libmachine: Making call to close driver server
	I0717 18:44:38.237896 1069276 main.go:141] libmachine: (addons-962955) Calling .Close
	I0717 18:44:38.238264 1069276 main.go:141] libmachine: (addons-962955) DBG | Closing plugin on server side
	I0717 18:44:38.238334 1069276 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:44:38.238349 1069276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:44:38.238375 1069276 main.go:141] libmachine: Making call to close driver server
	I0717 18:44:38.238388 1069276 main.go:141] libmachine: (addons-962955) Calling .Close
	I0717 18:44:38.239670 1069276 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:44:38.239691 1069276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:44:38.239670 1069276 main.go:141] libmachine: (addons-962955) DBG | Closing plugin on server side
	I0717 18:44:39.079942 1069276 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0717 18:44:39.079991 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHHostname
	I0717 18:44:39.083495 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:44:39.083967 1069276 main.go:141] libmachine: (addons-962955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:53:85", ip: ""} in network mk-addons-962955: {Iface:virbr1 ExpiryTime:2023-07-17 19:43:48 +0000 UTC Type:0 Mac:52:54:00:e9:53:85 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-962955 Clientid:01:52:54:00:e9:53:85}
	I0717 18:44:39.084011 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined IP address 192.168.39.215 and MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:44:39.084279 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHPort
	I0717 18:44:39.084613 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHKeyPath
	I0717 18:44:39.084832 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHUsername
	I0717 18:44:39.085024 1069276 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/addons-962955/id_rsa Username:docker}
	I0717 18:44:39.253118 1069276 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0717 18:44:39.299409 1069276 addons.go:231] Setting addon gcp-auth=true in "addons-962955"
	I0717 18:44:39.299495 1069276 host.go:66] Checking if "addons-962955" exists ...
	I0717 18:44:39.299920 1069276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:44:39.299977 1069276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:44:39.316289 1069276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33889
	I0717 18:44:39.316828 1069276 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:44:39.317447 1069276 main.go:141] libmachine: Using API Version  1
	I0717 18:44:39.317503 1069276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:44:39.317925 1069276 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:44:39.318509 1069276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:44:39.318556 1069276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:44:39.335014 1069276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35309
	I0717 18:44:39.335497 1069276 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:44:39.336148 1069276 main.go:141] libmachine: Using API Version  1
	I0717 18:44:39.336176 1069276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:44:39.336557 1069276 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:44:39.336802 1069276 main.go:141] libmachine: (addons-962955) Calling .GetState
	I0717 18:44:39.338723 1069276 main.go:141] libmachine: (addons-962955) Calling .DriverName
	I0717 18:44:39.339087 1069276 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0717 18:44:39.339127 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHHostname
	I0717 18:44:39.342429 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:44:39.342870 1069276 main.go:141] libmachine: (addons-962955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:53:85", ip: ""} in network mk-addons-962955: {Iface:virbr1 ExpiryTime:2023-07-17 19:43:48 +0000 UTC Type:0 Mac:52:54:00:e9:53:85 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-962955 Clientid:01:52:54:00:e9:53:85}
	I0717 18:44:39.342906 1069276 main.go:141] libmachine: (addons-962955) DBG | domain addons-962955 has defined IP address 192.168.39.215 and MAC address 52:54:00:e9:53:85 in network mk-addons-962955
	I0717 18:44:39.343084 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHPort
	I0717 18:44:39.343312 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHKeyPath
	I0717 18:44:39.343477 1069276 main.go:141] libmachine: (addons-962955) Calling .GetSSHUsername
	I0717 18:44:39.343631 1069276 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/addons-962955/id_rsa Username:docker}
	I0717 18:44:41.194092 1069276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.217928241s)
	I0717 18:44:41.194153 1069276 main.go:141] libmachine: Making call to close driver server
	I0717 18:44:41.194163 1069276 main.go:141] libmachine: (addons-962955) Calling .Close
	I0717 18:44:41.194163 1069276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.21447598s)
	I0717 18:44:41.194206 1069276 main.go:141] libmachine: Making call to close driver server
	I0717 18:44:41.194220 1069276 main.go:141] libmachine: (addons-962955) Calling .Close
	I0717 18:44:41.194266 1069276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.207730702s)
	I0717 18:44:41.194298 1069276 main.go:141] libmachine: Making call to close driver server
	I0717 18:44:41.194314 1069276 main.go:141] libmachine: (addons-962955) Calling .Close
	I0717 18:44:41.194330 1069276 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (8.862213359s)
	I0717 18:44:41.194468 1069276 main.go:141] libmachine: (addons-962955) DBG | Closing plugin on server side
	I0717 18:44:41.194487 1069276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.816174814s)
	I0717 18:44:41.194531 1069276 main.go:141] libmachine: Making call to close driver server
	I0717 18:44:41.194541 1069276 main.go:141] libmachine: (addons-962955) Calling .Close
	I0717 18:44:41.194540 1069276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.812766809s)
	I0717 18:44:41.194569 1069276 main.go:141] libmachine: Making call to close driver server
	I0717 18:44:41.194582 1069276 main.go:141] libmachine: (addons-962955) Calling .Close
	I0717 18:44:41.194706 1069276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.779703666s)
	I0717 18:44:41.194724 1069276 main.go:141] libmachine: Making call to close driver server
	I0717 18:44:41.194733 1069276 main.go:141] libmachine: (addons-962955) Calling .Close
	I0717 18:44:41.194890 1069276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.494901352s)
	W0717 18:44:41.194924 1069276 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0717 18:44:41.194958 1069276 retry.go:31] will retry after 307.522355ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0717 18:44:41.194983 1069276 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:44:41.194997 1069276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:44:41.195007 1069276 main.go:141] libmachine: Making call to close driver server
	I0717 18:44:41.195015 1069276 main.go:141] libmachine: (addons-962955) Calling .Close
	I0717 18:44:41.195070 1069276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.400731623s)
	I0717 18:44:41.195095 1069276 main.go:141] libmachine: Making call to close driver server
	I0717 18:44:41.195108 1069276 main.go:141] libmachine: (addons-962955) Calling .Close
	I0717 18:44:41.195136 1069276 main.go:141] libmachine: (addons-962955) DBG | Closing plugin on server side
	I0717 18:44:41.195149 1069276 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:44:41.195158 1069276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:44:41.195166 1069276 main.go:141] libmachine: Making call to close driver server
	I0717 18:44:41.195175 1069276 main.go:141] libmachine: (addons-962955) Calling .Close
	I0717 18:44:41.195355 1069276 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:44:41.195366 1069276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:44:41.195377 1069276 main.go:141] libmachine: Making call to close driver server
	I0717 18:44:41.195384 1069276 main.go:141] libmachine: (addons-962955) Calling .Close
	I0717 18:44:41.195543 1069276 main.go:141] libmachine: (addons-962955) DBG | Closing plugin on server side
	I0717 18:44:41.195573 1069276 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:44:41.195581 1069276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:44:41.195589 1069276 main.go:141] libmachine: Making call to close driver server
	I0717 18:44:41.195597 1069276 main.go:141] libmachine: (addons-962955) Calling .Close
	I0717 18:44:41.195675 1069276 main.go:141] libmachine: (addons-962955) DBG | Closing plugin on server side
	I0717 18:44:41.195702 1069276 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:44:41.195710 1069276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:44:41.195720 1069276 main.go:141] libmachine: Making call to close driver server
	I0717 18:44:41.195728 1069276 main.go:141] libmachine: (addons-962955) Calling .Close
	I0717 18:44:41.195905 1069276 node_ready.go:35] waiting up to 6m0s for node "addons-962955" to be "Ready" ...
	I0717 18:44:41.196112 1069276 main.go:141] libmachine: (addons-962955) DBG | Closing plugin on server side
	I0717 18:44:41.196140 1069276 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:44:41.196142 1069276 main.go:141] libmachine: (addons-962955) DBG | Closing plugin on server side
	I0717 18:44:41.196148 1069276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:44:41.196174 1069276 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:44:41.196184 1069276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:44:41.196193 1069276 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:44:41.196203 1069276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:44:41.196211 1069276 addons.go:467] Verifying addon ingress=true in "addons-962955"
	I0717 18:44:41.199858 1069276 out.go:177] * Verifying ingress addon...
	I0717 18:44:41.196193 1069276 addons.go:467] Verifying addon registry=true in "addons-962955"
	I0717 18:44:41.197447 1069276 main.go:141] libmachine: (addons-962955) DBG | Closing plugin on server side
	I0717 18:44:41.197481 1069276 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:44:41.197506 1069276 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:44:41.197916 1069276 main.go:141] libmachine: (addons-962955) DBG | Closing plugin on server side
	I0717 18:44:41.197947 1069276 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:44:41.197970 1069276 main.go:141] libmachine: (addons-962955) DBG | Closing plugin on server side
	I0717 18:44:41.197996 1069276 main.go:141] libmachine: (addons-962955) DBG | Closing plugin on server side
	I0717 18:44:41.198016 1069276 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:44:41.202307 1069276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:44:41.202340 1069276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:44:41.202363 1069276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:44:41.202397 1069276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:44:41.202425 1069276 main.go:141] libmachine: Making call to close driver server
	I0717 18:44:41.202443 1069276 main.go:141] libmachine: (addons-962955) Calling .Close
	I0717 18:44:41.202453 1069276 main.go:141] libmachine: Making call to close driver server
	I0717 18:44:41.202466 1069276 main.go:141] libmachine: (addons-962955) Calling .Close
	I0717 18:44:41.204733 1069276 out.go:177] * Verifying registry addon...
	I0717 18:44:41.202756 1069276 main.go:141] libmachine: (addons-962955) DBG | Closing plugin on server side
	I0717 18:44:41.202790 1069276 main.go:141] libmachine: (addons-962955) DBG | Closing plugin on server side
	I0717 18:44:41.203278 1069276 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0717 18:44:41.204388 1069276 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:44:41.204930 1069276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:44:41.204433 1069276 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:44:41.205029 1069276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:44:41.205054 1069276 addons.go:467] Verifying addon metrics-server=true in "addons-962955"
	I0717 18:44:41.207925 1069276 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0717 18:44:41.223056 1069276 node_ready.go:49] node "addons-962955" has status "Ready":"True"
	I0717 18:44:41.223083 1069276 node_ready.go:38] duration metric: took 27.161002ms waiting for node "addons-962955" to be "Ready" ...
	I0717 18:44:41.223094 1069276 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:44:41.233652 1069276 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0717 18:44:41.233678 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:44:41.235864 1069276 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0717 18:44:41.235887 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:44:41.239994 1069276 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-jjvff" in "kube-system" namespace to be "Ready" ...
	I0717 18:44:41.502656 1069276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 18:44:41.773902 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:44:41.774053 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:44:42.207350 1069276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.232358433s)
	I0717 18:44:42.207375 1069276 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.868263294s)
	I0717 18:44:42.207423 1069276 main.go:141] libmachine: Making call to close driver server
	I0717 18:44:42.207437 1069276 main.go:141] libmachine: (addons-962955) Calling .Close
	I0717 18:44:42.210035 1069276 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0717 18:44:42.207773 1069276 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:44:42.207816 1069276 main.go:141] libmachine: (addons-962955) DBG | Closing plugin on server side
	I0717 18:44:42.212036 1069276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:44:42.213821 1069276 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0717 18:44:42.212064 1069276 main.go:141] libmachine: Making call to close driver server
	I0717 18:44:42.213879 1069276 main.go:141] libmachine: (addons-962955) Calling .Close
	I0717 18:44:42.215756 1069276 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0717 18:44:42.215779 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0717 18:44:42.214344 1069276 main.go:141] libmachine: (addons-962955) DBG | Closing plugin on server side
	I0717 18:44:42.214387 1069276 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:44:42.215837 1069276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:44:42.215861 1069276 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-962955"
	I0717 18:44:42.217822 1069276 out.go:177] * Verifying csi-hostpath-driver addon...
	I0717 18:44:42.220336 1069276 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0717 18:44:42.265840 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:44:42.271044 1069276 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0717 18:44:42.271074 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0717 18:44:42.317094 1069276 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0717 18:44:42.317122 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:44:42.332532 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:44:42.364961 1069276 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 18:44:42.364993 1069276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0717 18:44:42.415135 1069276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 18:44:42.793399 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:44:42.846252 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:44:42.904480 1069276 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0717 18:44:42.904518 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:44:43.253993 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:44:43.254185 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:44:43.336119 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:44:43.336206 1069276 pod_ready.go:102] pod "coredns-5d78c9869d-jjvff" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:43.774090 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:44:43.774796 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:44:43.834299 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:44:44.272984 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:44:44.274594 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:44:44.338164 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:44:44.749635 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:44:44.761486 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:44:44.850212 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:44:45.310440 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:44:45.314657 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:44:45.407595 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:44:45.411926 1069276 pod_ready.go:102] pod "coredns-5d78c9869d-jjvff" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:45.521165 1069276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.018434697s)
	I0717 18:44:45.521250 1069276 main.go:141] libmachine: Making call to close driver server
	I0717 18:44:45.521269 1069276 main.go:141] libmachine: (addons-962955) Calling .Close
	I0717 18:44:45.521272 1069276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (3.106097084s)
	I0717 18:44:45.521317 1069276 main.go:141] libmachine: Making call to close driver server
	I0717 18:44:45.521337 1069276 main.go:141] libmachine: (addons-962955) Calling .Close
	I0717 18:44:45.521827 1069276 main.go:141] libmachine: (addons-962955) DBG | Closing plugin on server side
	I0717 18:44:45.521868 1069276 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:44:45.521879 1069276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:44:45.521896 1069276 main.go:141] libmachine: Making call to close driver server
	I0717 18:44:45.521905 1069276 main.go:141] libmachine: (addons-962955) Calling .Close
	I0717 18:44:45.521980 1069276 main.go:141] libmachine: (addons-962955) DBG | Closing plugin on server side
	I0717 18:44:45.522028 1069276 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:44:45.522046 1069276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:44:45.522076 1069276 main.go:141] libmachine: Making call to close driver server
	I0717 18:44:45.522088 1069276 main.go:141] libmachine: (addons-962955) Calling .Close
	I0717 18:44:45.522131 1069276 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:44:45.522150 1069276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:44:45.522150 1069276 main.go:141] libmachine: (addons-962955) DBG | Closing plugin on server side
	I0717 18:44:45.522704 1069276 main.go:141] libmachine: (addons-962955) DBG | Closing plugin on server side
	I0717 18:44:45.522746 1069276 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:44:45.522764 1069276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:44:45.524306 1069276 addons.go:467] Verifying addon gcp-auth=true in "addons-962955"
	I0717 18:44:45.527894 1069276 out.go:177] * Verifying gcp-auth addon...
	I0717 18:44:45.530522 1069276 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0717 18:44:45.544384 1069276 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0717 18:44:45.544420 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:44:45.751100 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:44:45.760453 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:44:45.823806 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:44:46.048748 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:44:46.241169 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:44:46.243696 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:44:46.324085 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:44:46.565730 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:44:46.740370 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:44:46.746334 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:44:46.826266 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:44:47.049208 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:44:47.246326 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:44:47.246390 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:44:47.324323 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:44:47.548979 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:44:47.740437 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:44:47.745370 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:44:47.783914 1069276 pod_ready.go:102] pod "coredns-5d78c9869d-jjvff" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:47.826206 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:44:48.050570 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:44:48.240770 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:44:48.243452 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:44:48.324401 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:44:48.549302 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:44:48.740147 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:44:48.745783 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:44:48.847430 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:44:49.050543 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:44:49.242902 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:44:49.243277 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:44:49.326132 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:44:49.551183 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:44:49.738685 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:44:49.743948 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:44:49.786093 1069276 pod_ready.go:102] pod "coredns-5d78c9869d-jjvff" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:49.858584 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:44:50.048884 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:44:50.264513 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:44:50.270087 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:44:50.328778 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:44:50.551145 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:44:50.742439 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:44:50.742601 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:44:50.823629 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:44:51.049274 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:44:51.244681 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:44:51.246559 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:44:51.353372 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:44:51.560977 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:44:51.749152 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:44:51.757101 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:44:51.791006 1069276 pod_ready.go:102] pod "coredns-5d78c9869d-jjvff" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:51.841099 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:44:52.061747 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:44:52.240700 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:44:52.243450 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:44:52.324248 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:44:52.557070 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:44:52.738639 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:44:52.741711 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:44:52.829796 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:44:53.048798 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:44:53.243111 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:44:53.247033 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:44:53.331149 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:44:53.555911 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:44:53.740082 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:44:53.757641 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:44:53.808294 1069276 pod_ready.go:102] pod "coredns-5d78c9869d-jjvff" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:53.838752 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:44:54.050307 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:44:54.252505 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:44:54.252693 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:44:54.343054 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:44:54.551636 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:44:54.749239 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:44:54.749483 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:44:54.830319 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:44:55.049361 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:44:55.284313 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:44:55.285283 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:44:55.333798 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:44:55.554436 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:44:55.743830 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:44:55.743846 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:44:55.835036 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:44:56.049311 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:44:56.240509 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:44:56.245016 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:44:56.288501 1069276 pod_ready.go:102] pod "coredns-5d78c9869d-jjvff" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:56.322975 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:44:56.549244 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:44:56.738548 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:44:56.752925 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:44:56.824052 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:44:57.050630 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:44:57.239434 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:44:57.243003 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:44:57.323738 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:44:57.550449 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:44:57.739511 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:44:57.743792 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:44:57.825843 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:44:58.258787 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:44:58.258804 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:44:58.259353 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:44:58.326779 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:44:58.550329 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:44:58.740516 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:44:58.742252 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:44:58.783351 1069276 pod_ready.go:102] pod "coredns-5d78c9869d-jjvff" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:58.824521 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:44:59.049607 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:44:59.255686 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:44:59.295071 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:44:59.324688 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:44:59.548935 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:44:59.753540 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:44:59.756094 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:44:59.823960 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:00.049621 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:00.242848 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:00.256138 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:00.323709 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:00.550956 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:00.738529 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:00.742082 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:00.837402 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:01.063060 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:01.247706 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:01.247856 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:01.286195 1069276 pod_ready.go:102] pod "coredns-5d78c9869d-jjvff" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:01.324929 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:01.548982 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:01.744512 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:01.754900 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:01.823719 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:02.049282 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:02.241460 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:02.247789 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:02.329433 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:02.548618 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:02.740035 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:02.741841 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:02.824666 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:03.050191 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:03.240319 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:03.242408 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:03.323718 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:03.549165 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:03.745747 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:03.745962 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:03.784212 1069276 pod_ready.go:102] pod "coredns-5d78c9869d-jjvff" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:03.830858 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:04.050030 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:04.239537 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:04.241536 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:04.323686 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:04.549061 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:04.741040 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:04.744657 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:04.823266 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:05.049443 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:05.239495 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:05.243064 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:05.322765 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:05.550201 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:05.741969 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:05.742753 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:05.785928 1069276 pod_ready.go:102] pod "coredns-5d78c9869d-jjvff" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:05.823417 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:06.048226 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:06.239816 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:06.241742 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:06.323467 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:06.549659 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:06.739139 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:06.742689 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:06.823520 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:07.049169 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:07.239661 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:07.240908 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:07.324595 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:07.549945 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:07.738866 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:07.741249 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:07.823295 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:08.053835 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:08.240474 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:08.242843 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:08.283872 1069276 pod_ready.go:102] pod "coredns-5d78c9869d-jjvff" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:08.324563 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:08.548859 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:08.738791 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:08.741292 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:08.823618 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:09.053681 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:09.239929 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:09.241315 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:09.323631 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:09.548819 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:09.740175 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:09.744727 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:09.823977 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:10.048931 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:10.238932 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:10.240979 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:10.334162 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:11.038745 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:11.063374 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:11.063888 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:11.068785 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:11.069044 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:11.073029 1069276 pod_ready.go:102] pod "coredns-5d78c9869d-jjvff" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:11.240441 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:11.243581 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:11.329152 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:11.549179 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:11.738966 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:11.743132 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:11.823691 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:12.050609 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:12.239942 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:12.261677 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:12.341670 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:12.552124 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:12.739616 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:12.742537 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:12.823084 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:13.049402 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:13.239119 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:13.242629 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:13.283865 1069276 pod_ready.go:102] pod "coredns-5d78c9869d-jjvff" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:13.347537 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:13.549441 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:13.739938 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:13.742208 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:13.783284 1069276 pod_ready.go:92] pod "coredns-5d78c9869d-jjvff" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:13.783316 1069276 pod_ready.go:81] duration metric: took 32.543279435s waiting for pod "coredns-5d78c9869d-jjvff" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:13.783327 1069276 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-962955" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:13.797071 1069276 pod_ready.go:92] pod "etcd-addons-962955" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:13.797097 1069276 pod_ready.go:81] duration metric: took 13.763352ms waiting for pod "etcd-addons-962955" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:13.797108 1069276 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-962955" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:13.806492 1069276 pod_ready.go:92] pod "kube-apiserver-addons-962955" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:13.806527 1069276 pod_ready.go:81] duration metric: took 9.410596ms waiting for pod "kube-apiserver-addons-962955" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:13.806544 1069276 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-962955" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:13.832909 1069276 pod_ready.go:92] pod "kube-controller-manager-addons-962955" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:13.832947 1069276 pod_ready.go:81] duration metric: took 26.394275ms waiting for pod "kube-controller-manager-addons-962955" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:13.832966 1069276 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f77hz" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:13.833330 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:13.848545 1069276 pod_ready.go:92] pod "kube-proxy-f77hz" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:13.848569 1069276 pod_ready.go:81] duration metric: took 15.596256ms waiting for pod "kube-proxy-f77hz" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:13.848579 1069276 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-962955" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:14.049338 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:14.179778 1069276 pod_ready.go:92] pod "kube-scheduler-addons-962955" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:14.179817 1069276 pod_ready.go:81] duration metric: took 331.226173ms waiting for pod "kube-scheduler-addons-962955" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:14.179832 1069276 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-844d8db974-gvfk5" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:14.241449 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:14.246432 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:14.323778 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:14.549100 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:14.739011 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:14.740792 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:14.822920 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:15.049766 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:15.245726 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:15.247599 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:15.323142 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:15.548970 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:15.741487 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:15.743888 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:15.831346 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:16.048850 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:16.368309 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:16.374783 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:16.376394 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:16.549500 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:16.590276 1069276 pod_ready.go:102] pod "metrics-server-844d8db974-gvfk5" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:16.740386 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:16.749113 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:16.834366 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:17.049735 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:17.241244 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:17.242953 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:17.328769 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:17.549249 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:17.740090 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:17.743138 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:17.825017 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:18.049055 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:18.239364 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:18.245913 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:18.325542 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:18.558878 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:18.603639 1069276 pod_ready.go:102] pod "metrics-server-844d8db974-gvfk5" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:18.741919 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:18.746407 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:18.825300 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:19.049167 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:19.249437 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:19.249705 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:19.332098 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:19.550372 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:19.742008 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:19.775043 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:19.840544 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:20.048883 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:20.255351 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:20.290046 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:20.343325 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:20.551724 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:20.738537 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:20.750964 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:20.828848 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:21.049085 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:21.087574 1069276 pod_ready.go:102] pod "metrics-server-844d8db974-gvfk5" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:21.239721 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:21.243066 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:21.338432 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:21.575161 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:21.630181 1069276 pod_ready.go:92] pod "metrics-server-844d8db974-gvfk5" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:21.630208 1069276 pod_ready.go:81] duration metric: took 7.45036915s waiting for pod "metrics-server-844d8db974-gvfk5" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:21.630230 1069276 pod_ready.go:38] duration metric: took 40.407126044s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:45:21.630276 1069276 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:45:21.630342 1069276 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:45:21.658562 1069276 api_server.go:72] duration metric: took 49.331046536s to wait for apiserver process to appear ...
	I0717 18:45:21.658593 1069276 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:45:21.658614 1069276 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8443/healthz ...
	I0717 18:45:21.663824 1069276 api_server.go:279] https://192.168.39.215:8443/healthz returned 200:
	ok
	I0717 18:45:21.665067 1069276 api_server.go:141] control plane version: v1.27.3
	I0717 18:45:21.665093 1069276 api_server.go:131] duration metric: took 6.494384ms to wait for apiserver health ...
	I0717 18:45:21.665103 1069276 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:45:21.675820 1069276 system_pods.go:59] 17 kube-system pods found
	I0717 18:45:21.675857 1069276 system_pods.go:61] "coredns-5d78c9869d-jjvff" [d7ed0584-c74f-44df-996f-1c69319064f5] Running
	I0717 18:45:21.675863 1069276 system_pods.go:61] "csi-hostpath-attacher-0" [f4eb410f-090a-40f7-adbc-1f189e5f9095] Running
	I0717 18:45:21.675867 1069276 system_pods.go:61] "csi-hostpath-resizer-0" [01364ff1-0e36-4237-b3b1-c4633b645a26] Running
	I0717 18:45:21.675877 1069276 system_pods.go:61] "csi-hostpathplugin-qhct8" [8d7aebdd-9397-4c1d-92b5-d58ba8f52206] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0717 18:45:21.675884 1069276 system_pods.go:61] "etcd-addons-962955" [0028034b-63df-4da9-a667-c2b90b5f3e11] Running
	I0717 18:45:21.675889 1069276 system_pods.go:61] "kube-apiserver-addons-962955" [a728f553-9a66-474b-94fc-fe8f6da0c911] Running
	I0717 18:45:21.675894 1069276 system_pods.go:61] "kube-controller-manager-addons-962955" [0ad496e8-9f4e-4600-bdac-4d6d7cd99047] Running
	I0717 18:45:21.675898 1069276 system_pods.go:61] "kube-ingress-dns-minikube" [8370a070-a195-4a63-8f95-3fa42711cefa] Running
	I0717 18:45:21.675903 1069276 system_pods.go:61] "kube-proxy-f77hz" [82da3265-b108-49c0-be5e-6ebfb39832a8] Running
	I0717 18:45:21.675907 1069276 system_pods.go:61] "kube-scheduler-addons-962955" [b09cb359-77c3-4dd8-abdd-7def3af77ab5] Running
	I0717 18:45:21.675911 1069276 system_pods.go:61] "metrics-server-844d8db974-gvfk5" [04b104ed-620e-4c2c-835f-4817b395d35b] Running
	I0717 18:45:21.675917 1069276 system_pods.go:61] "registry-proxy-glc8x" [e1e6ea98-4e93-499c-982c-fe125d4fb16d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0717 18:45:21.675924 1069276 system_pods.go:61] "registry-qgwcj" [a58688b7-a416-473a-8314-6cd11129080a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0717 18:45:21.675934 1069276 system_pods.go:61] "snapshot-controller-75bbb956b9-5lnlr" [54015ada-487d-4e84-9548-4cb8f88c965f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 18:45:21.675945 1069276 system_pods.go:61] "snapshot-controller-75bbb956b9-tx6cs" [2e78e264-4b16-4781-b3dc-55dff902738f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 18:45:21.675954 1069276 system_pods.go:61] "storage-provisioner" [00adf529-ba9e-4cf9-b0a0-b328a57293ac] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 18:45:21.675962 1069276 system_pods.go:61] "tiller-deploy-6847666dc-gk8xr" [838657fc-cef5-4ca0-8c2b-73dcac62920b] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0717 18:45:21.675976 1069276 system_pods.go:74] duration metric: took 10.867477ms to wait for pod list to return data ...
	I0717 18:45:21.675985 1069276 default_sa.go:34] waiting for default service account to be created ...
	I0717 18:45:21.678886 1069276 default_sa.go:45] found service account: "default"
	I0717 18:45:21.678910 1069276 default_sa.go:55] duration metric: took 2.919093ms for default service account to be created ...
	I0717 18:45:21.678919 1069276 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 18:45:21.696622 1069276 system_pods.go:86] 17 kube-system pods found
	I0717 18:45:21.696664 1069276 system_pods.go:89] "coredns-5d78c9869d-jjvff" [d7ed0584-c74f-44df-996f-1c69319064f5] Running
	I0717 18:45:21.696671 1069276 system_pods.go:89] "csi-hostpath-attacher-0" [f4eb410f-090a-40f7-adbc-1f189e5f9095] Running
	I0717 18:45:21.696676 1069276 system_pods.go:89] "csi-hostpath-resizer-0" [01364ff1-0e36-4237-b3b1-c4633b645a26] Running
	I0717 18:45:21.696684 1069276 system_pods.go:89] "csi-hostpathplugin-qhct8" [8d7aebdd-9397-4c1d-92b5-d58ba8f52206] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0717 18:45:21.696690 1069276 system_pods.go:89] "etcd-addons-962955" [0028034b-63df-4da9-a667-c2b90b5f3e11] Running
	I0717 18:45:21.696695 1069276 system_pods.go:89] "kube-apiserver-addons-962955" [a728f553-9a66-474b-94fc-fe8f6da0c911] Running
	I0717 18:45:21.696701 1069276 system_pods.go:89] "kube-controller-manager-addons-962955" [0ad496e8-9f4e-4600-bdac-4d6d7cd99047] Running
	I0717 18:45:21.696708 1069276 system_pods.go:89] "kube-ingress-dns-minikube" [8370a070-a195-4a63-8f95-3fa42711cefa] Running
	I0717 18:45:21.696711 1069276 system_pods.go:89] "kube-proxy-f77hz" [82da3265-b108-49c0-be5e-6ebfb39832a8] Running
	I0717 18:45:21.696715 1069276 system_pods.go:89] "kube-scheduler-addons-962955" [b09cb359-77c3-4dd8-abdd-7def3af77ab5] Running
	I0717 18:45:21.696719 1069276 system_pods.go:89] "metrics-server-844d8db974-gvfk5" [04b104ed-620e-4c2c-835f-4817b395d35b] Running
	I0717 18:45:21.696724 1069276 system_pods.go:89] "registry-proxy-glc8x" [e1e6ea98-4e93-499c-982c-fe125d4fb16d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0717 18:45:21.696730 1069276 system_pods.go:89] "registry-qgwcj" [a58688b7-a416-473a-8314-6cd11129080a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0717 18:45:21.696737 1069276 system_pods.go:89] "snapshot-controller-75bbb956b9-5lnlr" [54015ada-487d-4e84-9548-4cb8f88c965f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 18:45:21.696744 1069276 system_pods.go:89] "snapshot-controller-75bbb956b9-tx6cs" [2e78e264-4b16-4781-b3dc-55dff902738f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 18:45:21.696752 1069276 system_pods.go:89] "storage-provisioner" [00adf529-ba9e-4cf9-b0a0-b328a57293ac] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 18:45:21.696762 1069276 system_pods.go:89] "tiller-deploy-6847666dc-gk8xr" [838657fc-cef5-4ca0-8c2b-73dcac62920b] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0717 18:45:21.696772 1069276 system_pods.go:126] duration metric: took 17.845557ms to wait for k8s-apps to be running ...
	I0717 18:45:21.696787 1069276 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 18:45:21.696838 1069276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:45:21.715018 1069276 system_svc.go:56] duration metric: took 18.206844ms WaitForService to wait for kubelet.
	I0717 18:45:21.715055 1069276 kubeadm.go:581] duration metric: took 49.387550088s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 18:45:21.715080 1069276 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:45:21.718506 1069276 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 18:45:21.718567 1069276 node_conditions.go:123] node cpu capacity is 2
	I0717 18:45:21.718581 1069276 node_conditions.go:105] duration metric: took 3.495292ms to run NodePressure ...
	I0717 18:45:21.718611 1069276 start.go:228] waiting for startup goroutines ...
	I0717 18:45:21.738658 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:21.740860 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:21.823888 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:22.049415 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:22.261280 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:22.264636 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:22.325515 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:22.553062 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:22.739625 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:22.741360 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:22.824477 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:23.053690 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:23.239818 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:23.244091 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:23.358430 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:23.553238 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:23.739581 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:23.742012 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:23.832080 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:24.049345 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:24.239108 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:24.241545 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:24.324105 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:24.551326 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:24.739450 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:24.746251 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:24.829545 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:25.048855 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:25.238675 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:25.240879 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:25.322879 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:25.551195 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:25.742717 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:25.749825 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:25.827350 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:26.051420 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:26.240367 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:26.243116 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:26.327160 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:26.549662 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:26.740396 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:26.749177 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:26.823877 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:27.048989 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:27.240222 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:27.242833 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:27.326990 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:27.549841 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:27.740766 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:27.742956 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:27.823566 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:28.049399 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:28.238828 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:28.242442 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:28.323525 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:28.555951 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:28.850629 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:28.853864 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:28.859334 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:29.051057 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:29.242702 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:29.245145 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:29.324353 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:29.548765 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:29.742857 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:29.743709 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:29.826461 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:30.050256 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:30.273727 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:30.273975 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:30.329099 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:30.553531 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:30.754483 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:30.758005 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:31.063112 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:31.063987 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:31.240370 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:31.241543 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:31.324649 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:31.548838 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:31.739762 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:31.742556 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:31.823028 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:32.049273 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:32.239959 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:32.250641 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:32.323851 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:32.555163 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:32.738937 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:32.743074 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:32.823815 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:33.049442 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:33.239047 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:33.242368 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:33.324334 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:33.553378 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:33.740510 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:33.741156 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:33.824846 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:34.048866 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:34.239919 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:34.244892 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:34.324295 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:34.548673 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:34.739774 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:34.746714 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:34.829947 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:35.049085 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:35.238689 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:35.241620 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:35.324363 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:35.548804 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:35.739895 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:35.751579 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:35.823820 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:36.050270 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:36.241370 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:36.244840 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:36.324738 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:36.557258 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:36.741519 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:36.748896 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:36.940070 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:37.133170 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:37.243706 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:37.247770 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:37.325059 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:37.549922 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:37.744909 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:37.748695 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:37.829499 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:38.050818 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:38.242450 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:38.244098 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:38.324409 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:38.557627 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:38.743671 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:38.745767 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:38.839332 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:39.050258 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:39.240275 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:39.242693 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:45:39.323495 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:39.549053 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:39.742334 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:39.745448 1069276 kapi.go:107] duration metric: took 58.537518389s to wait for kubernetes.io/minikube-addons=registry ...
	I0717 18:45:39.841690 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:40.058108 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:40.239240 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:40.323292 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:40.555851 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:40.742966 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:40.825210 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:41.048943 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:41.238372 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:41.327570 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:41.550082 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:41.739265 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:41.826441 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:42.049458 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:42.241261 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:42.329215 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:42.550749 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:42.743862 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:42.822909 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:43.090344 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:43.344908 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:43.344942 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:43.560491 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:43.740169 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:43.822884 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:44.049081 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:44.238344 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:44.324590 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:44.549266 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:44.740201 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:44.825620 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:45.048577 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:45.239355 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:45.324194 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:45.550997 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:45.752261 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:45.824188 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:46.053742 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:46.240252 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:46.327455 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:46.548774 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:46.739992 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:46.824780 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:47.049007 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:47.239320 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:47.323356 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:47.548761 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:47.739993 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:47.826236 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:48.049711 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:48.239807 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:48.325653 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:48.549039 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:48.738887 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:48.824262 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:49.048506 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:49.240140 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:49.323874 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:45:49.564262 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:49.739350 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:49.825413 1069276 kapi.go:107] duration metric: took 1m7.605069436s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0717 18:45:50.049391 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:50.244072 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:50.548571 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:50.741293 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:51.049174 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:51.238987 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:51.549104 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:51.747042 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:52.053498 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:52.241608 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:52.548913 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:52.740014 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:53.049515 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:53.239751 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:53.549046 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:53.738864 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:54.054428 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:54.242310 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:54.549858 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:54.740755 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:55.050387 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:55.240071 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:55.549621 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:55.740009 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:56.050011 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:56.239246 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:56.549247 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:56.740428 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:57.049303 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:57.242586 1069276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:45:57.552688 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:57.740682 1069276 kapi.go:107] duration metric: took 1m16.537401674s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0717 18:45:58.049727 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:58.553425 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:59.049509 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:45:59.549193 1069276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:00.050035 1069276 kapi.go:107] duration metric: took 1m14.519505708s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0717 18:46:00.052639 1069276 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-962955 cluster.
	I0717 18:46:00.054672 1069276 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0717 18:46:00.056638 1069276 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0717 18:46:00.058689 1069276 out.go:177] * Enabled addons: ingress-dns, default-storageclass, helm-tiller, cloud-spanner, storage-provisioner, inspektor-gadget, metrics-server, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0717 18:46:00.060578 1069276 addons.go:502] enable addons completed in 1m28.658984947s: enabled=[ingress-dns default-storageclass helm-tiller cloud-spanner storage-provisioner inspektor-gadget metrics-server volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
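	A minimal sketch of the two options the gcp-auth messages above describe. Only the `gcp-auth-skip-secret` label key, the `--refresh` flag, the profile/context name addons-962955, and the nginx image name are taken from this log; the pod name, container name, label value "true", and untagged image reference are illustrative assumptions, not something the report itself ran.
	
	  # Opt a single pod out of credential mounting by labeling it in its manifest.
	  kubectl --context addons-962955 apply -f - <<EOF
	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: no-gcp-creds                  # hypothetical pod name, for illustration only
	    labels:
	      gcp-auth-skip-secret: "true"      # key from the addon message above; "true" value is assumed
	  spec:
	    containers:
	    - name: app
	      image: docker.io/library/nginx    # image family taken from the test's nginx pod
	  EOF
	
	  # Re-run the addon so pods created before it was enabled get credentials mounted,
	  # as the addon message suggests ("rerun addons enable with --refresh").
	  minikube -p addons-962955 addons enable gcp-auth --refresh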
	I0717 18:46:00.060626 1069276 start.go:233] waiting for cluster config update ...
	I0717 18:46:00.060647 1069276 start.go:242] writing updated cluster config ...
	I0717 18:46:00.060962 1069276 ssh_runner.go:195] Run: rm -f paused
	I0717 18:46:00.119618 1069276 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 18:46:00.122326 1069276 out.go:177] * Done! kubectl is now configured to use "addons-962955" cluster and "default" namespace by default
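	A quick, hedged way to verify the state the "Done!" line reports; the context/profile name comes straight from the log, and the namespace-wide pod listing is only an illustration of checking the addon pods waited on above.
	
	  kubectl config current-context                 # expected to print addons-962955
	  kubectl --context addons-962955 get pods -A    # lists the ingress-nginx, gcp-auth, csi-hostpath-driver pods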
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-07-17 18:43:45 UTC, ends at Mon 2023-07-17 18:48:56 UTC. --
	Jul 17 18:48:56 addons-962955 crio[713]: time="2023-07-17 18:48:56.456459880Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1f10bda0-f3a4-4f3b-a21f-3bac12a203ce name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 18:48:56 addons-962955 crio[713]: time="2023-07-17 18:48:56.456812388Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:96405cb02b01e2b03673aff33dadb92d7dcc76129cee583a8dba305be60b36e5,PodSandboxId:ce83f56b05293b003e46a761fcedffbf2c9eb52bdd284a5ce086a27c5cbaa8b8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1689619728138109093,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-65bdb79f98-skxnl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 96c6d749-3842-4f90-9830-9f8a72df7c80,},Annotations:map[string]string{io.kubernetes.container.hash: 63a0e66f,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:addc566dbe541395dc0d8ba3a9aee521a88cd84d33ac1e170e19b85d78e29af6,PodSandboxId:1c08d21914fa0f898aacff35c164e24109a743efd91b87172aabdee6a60c259b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1689619586880049734,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 20b6362d-9006-45c4-8ad5-31a8d7b2269d,},Annotations:map[string]string{io.kubernet
es.container.hash: cfd56cb1,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da4658fe6322f0ea4d660354d1d951915cceb5495aaeb7543c9cb1210b76d900,PodSandboxId:b297b48fcf66ae1f87ead733d38137152a6e78cfc6cf2b86190aad101da57558,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,State:CONTAINER_RUNNING,CreatedAt:1689619568779216946,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-66f6498c69-nld6z,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: ee8c7553-ddfd-4186-9c43-d21610f26972,},Annotations:map[string]string{io.kubernetes.container.hash: f46d7a95,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f28fea37ea3ff3a176237e04a113e4472f51e2ac954a170e4cb6fc0d9b46a478,PodSandboxId:71b660086048bb156c1348de28629365f9a688af96369262c791b0e5026cf7b5,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1689619558540187304,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-58478865f7-hktrs,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: cd41ae9a-6007-4317-8b82-6e40729c530d,},Annotations:map[string]string{io.kubernetes.container.hash: 1d7444f8,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86ef3fc2ac077fddb9696295599fd2ea576669262060a1c57c24c97ad719da81,PodSandboxId:fccb518b32ffbb0fd6a9e25929dd7481ec158b31b4534d30099e9efb6ff7daff,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdb
f80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1689619526394056561,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-mp4sv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d047f76a-35cb-4426-8e6b-e7af3ce13f3e,},Annotations:map[string]string{io.kubernetes.container.hash: 33ab19a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:894fcddfd0b63d395e234a4dac5b4adbb0cec6a44a134ae2c67534e93b9f1f48,PodSandboxId:eaf8825a8549ce6acba70b0e2ade1a7f9b54bd843013ae04489547516db48561,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-cert
gen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1689619522390540307,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nk254,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 28640f5f-0678-4cc8-a907-c28e9be776ba,},Annotations:map[string]string{io.kubernetes.container.hash: 835b9453,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:088f919f2ee5230339b8b8cec2b04226fc0c7fadbc1ca9b04f3592c4ac88460c,PodSandboxId:06a36d27567fbc91ae01ae557ad00637d5f7b21fc2ea570cc7db222f6defca24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-prov
isioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689619521464721708,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00adf529-ba9e-4cf9-b0a0-b328a57293ac,},Annotations:map[string]string{io.kubernetes.container.hash: c7ce65b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58ebd34c56b382ce5821084b795b713d1b3e8ba6850973297fa14ad86287f6cb,PodSandboxId:23b12278e0ff76a1611e9dddf36e0e135e3e10bec00aa59b4ad907bf5d580f36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8
428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689619491681034540,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f77hz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82da3265-b108-49c0-be5e-6ebfb39832a8,},Annotations:map[string]string{io.kubernetes.container.hash: 79caa543,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72e8253e3d276bc05695ef7cd810dcb2eadf498290410cde63b20facd0321b77,PodSandboxId:06a36d27567fbc91ae01ae557ad00637d5f7b21fc2ea570cc7db222f6defca24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19
e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689619488655481627,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00adf529-ba9e-4cf9-b0a0-b328a57293ac,},Annotations:map[string]string{io.kubernetes.container.hash: c7ce65b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ddadfa25d4f82406d962ba967b83f3eac95fe41753fc47c811f17fc7bd4f08d,PodSandboxId:c14595fef13fae6e9b8496cd58ca9611c881571bc09a8d1e6323b67b0d6e9527,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43a
de8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689619478301404939,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-jjvff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7ed0584-c74f-44df-996f-1c69319064f5,},Annotations:map[string]string{io.kubernetes.container.hash: b9f45280,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55cd1756ecfba2901a7efe8c295c4097a44e9c86fa2512d2b65d8f0e708444b7,PodSandboxId:344b61465ecbc13fb8a1c0687dcd612c8b46a81bdd0d5cfff668d81760d27b51,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{I
mage:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689619451447969447,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-962955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36a67ffa2f607f3b3daf122ae6adf6e4,},Annotations:map[string]string{io.kubernetes.container.hash: 2c25e98d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f82d8789057484631a5734bef91278461e451f2e9626d361b55474230203c236,PodSandboxId:7196a49d9b9ef2dd5f9c574d75fc27abea515c80e3a1e091f33b12eade2d6796,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d
94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689619451337702565,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-962955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6147f1cb481e74c3859bb9d1cba50269,},Annotations:map[string]string{io.kubernetes.container.hash: fcf5ac72,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eed3bed9358a2c4d1c4345919c221238c399c1ba82c35757488923fa9df4279,PodSandboxId:ab97789a317db185a91b823145a3bed6ec3853c694a9bed7feec233dc5ceda6c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc
930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689619451233993456,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-962955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc8e7a8e6227b251318d866127a9a6b1,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:780aa100b582282e4bc349f63bd05143d8aa40d1e82edb3c2f0fe232ff286825,PodSandboxId:eff8b4e433d93b6459441b272158e1348d3279a1226cc819ad83d4d4010aae99,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3
a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689619451115266825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-962955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19c3fe63480577575af7baec98918154,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1f10bda0-f3a4-4f3b-a21f-3bac12a203ce name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 18:48:56 addons-962955 crio[713]: time="2023-07-17 18:48:56.495241334Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ddc6dc4e-318f-47fa-90a2-fea02403c2e5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 18:48:56 addons-962955 crio[713]: time="2023-07-17 18:48:56.495314731Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ddc6dc4e-318f-47fa-90a2-fea02403c2e5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 18:48:56 addons-962955 crio[713]: time="2023-07-17 18:48:56.495675258Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:96405cb02b01e2b03673aff33dadb92d7dcc76129cee583a8dba305be60b36e5,PodSandboxId:ce83f56b05293b003e46a761fcedffbf2c9eb52bdd284a5ce086a27c5cbaa8b8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1689619728138109093,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-65bdb79f98-skxnl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 96c6d749-3842-4f90-9830-9f8a72df7c80,},Annotations:map[string]string{io.kubernetes.container.hash: 63a0e66f,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:addc566dbe541395dc0d8ba3a9aee521a88cd84d33ac1e170e19b85d78e29af6,PodSandboxId:1c08d21914fa0f898aacff35c164e24109a743efd91b87172aabdee6a60c259b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1689619586880049734,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 20b6362d-9006-45c4-8ad5-31a8d7b2269d,},Annotations:map[string]string{io.kubernet
es.container.hash: cfd56cb1,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da4658fe6322f0ea4d660354d1d951915cceb5495aaeb7543c9cb1210b76d900,PodSandboxId:b297b48fcf66ae1f87ead733d38137152a6e78cfc6cf2b86190aad101da57558,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,State:CONTAINER_RUNNING,CreatedAt:1689619568779216946,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-66f6498c69-nld6z,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: ee8c7553-ddfd-4186-9c43-d21610f26972,},Annotations:map[string]string{io.kubernetes.container.hash: f46d7a95,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f28fea37ea3ff3a176237e04a113e4472f51e2ac954a170e4cb6fc0d9b46a478,PodSandboxId:71b660086048bb156c1348de28629365f9a688af96369262c791b0e5026cf7b5,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1689619558540187304,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-58478865f7-hktrs,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: cd41ae9a-6007-4317-8b82-6e40729c530d,},Annotations:map[string]string{io.kubernetes.container.hash: 1d7444f8,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86ef3fc2ac077fddb9696295599fd2ea576669262060a1c57c24c97ad719da81,PodSandboxId:fccb518b32ffbb0fd6a9e25929dd7481ec158b31b4534d30099e9efb6ff7daff,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdb
f80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1689619526394056561,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-mp4sv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d047f76a-35cb-4426-8e6b-e7af3ce13f3e,},Annotations:map[string]string{io.kubernetes.container.hash: 33ab19a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:894fcddfd0b63d395e234a4dac5b4adbb0cec6a44a134ae2c67534e93b9f1f48,PodSandboxId:eaf8825a8549ce6acba70b0e2ade1a7f9b54bd843013ae04489547516db48561,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-cert
gen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1689619522390540307,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nk254,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 28640f5f-0678-4cc8-a907-c28e9be776ba,},Annotations:map[string]string{io.kubernetes.container.hash: 835b9453,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:088f919f2ee5230339b8b8cec2b04226fc0c7fadbc1ca9b04f3592c4ac88460c,PodSandboxId:06a36d27567fbc91ae01ae557ad00637d5f7b21fc2ea570cc7db222f6defca24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-prov
isioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689619521464721708,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00adf529-ba9e-4cf9-b0a0-b328a57293ac,},Annotations:map[string]string{io.kubernetes.container.hash: c7ce65b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58ebd34c56b382ce5821084b795b713d1b3e8ba6850973297fa14ad86287f6cb,PodSandboxId:23b12278e0ff76a1611e9dddf36e0e135e3e10bec00aa59b4ad907bf5d580f36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8
428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689619491681034540,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f77hz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82da3265-b108-49c0-be5e-6ebfb39832a8,},Annotations:map[string]string{io.kubernetes.container.hash: 79caa543,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72e8253e3d276bc05695ef7cd810dcb2eadf498290410cde63b20facd0321b77,PodSandboxId:06a36d27567fbc91ae01ae557ad00637d5f7b21fc2ea570cc7db222f6defca24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19
e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689619488655481627,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00adf529-ba9e-4cf9-b0a0-b328a57293ac,},Annotations:map[string]string{io.kubernetes.container.hash: c7ce65b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ddadfa25d4f82406d962ba967b83f3eac95fe41753fc47c811f17fc7bd4f08d,PodSandboxId:c14595fef13fae6e9b8496cd58ca9611c881571bc09a8d1e6323b67b0d6e9527,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43a
de8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689619478301404939,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-jjvff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7ed0584-c74f-44df-996f-1c69319064f5,},Annotations:map[string]string{io.kubernetes.container.hash: b9f45280,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55cd1756ecfba2901a7efe8c295c4097a44e9c86fa2512d2b65d8f0e708444b7,PodSandboxId:344b61465ecbc13fb8a1c0687dcd612c8b46a81bdd0d5cfff668d81760d27b51,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{I
mage:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689619451447969447,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-962955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36a67ffa2f607f3b3daf122ae6adf6e4,},Annotations:map[string]string{io.kubernetes.container.hash: 2c25e98d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f82d8789057484631a5734bef91278461e451f2e9626d361b55474230203c236,PodSandboxId:7196a49d9b9ef2dd5f9c574d75fc27abea515c80e3a1e091f33b12eade2d6796,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d
94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689619451337702565,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-962955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6147f1cb481e74c3859bb9d1cba50269,},Annotations:map[string]string{io.kubernetes.container.hash: fcf5ac72,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eed3bed9358a2c4d1c4345919c221238c399c1ba82c35757488923fa9df4279,PodSandboxId:ab97789a317db185a91b823145a3bed6ec3853c694a9bed7feec233dc5ceda6c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc
930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689619451233993456,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-962955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc8e7a8e6227b251318d866127a9a6b1,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:780aa100b582282e4bc349f63bd05143d8aa40d1e82edb3c2f0fe232ff286825,PodSandboxId:eff8b4e433d93b6459441b272158e1348d3279a1226cc819ad83d4d4010aae99,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3
a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689619451115266825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-962955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19c3fe63480577575af7baec98918154,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ddc6dc4e-318f-47fa-90a2-fea02403c2e5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 18:48:56 addons-962955 crio[713]: time="2023-07-17 18:48:56.540478899Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=79a11ffe-ceb9-4e89-b2fa-1f358c5543a5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 18:48:56 addons-962955 crio[713]: time="2023-07-17 18:48:56.540633572Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=79a11ffe-ceb9-4e89-b2fa-1f358c5543a5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 18:48:56 addons-962955 crio[713]: time="2023-07-17 18:48:56.540989711Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:96405cb02b01e2b03673aff33dadb92d7dcc76129cee583a8dba305be60b36e5,PodSandboxId:ce83f56b05293b003e46a761fcedffbf2c9eb52bdd284a5ce086a27c5cbaa8b8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1689619728138109093,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-65bdb79f98-skxnl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 96c6d749-3842-4f90-9830-9f8a72df7c80,},Annotations:map[string]string{io.kubernetes.container.hash: 63a0e66f,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:addc566dbe541395dc0d8ba3a9aee521a88cd84d33ac1e170e19b85d78e29af6,PodSandboxId:1c08d21914fa0f898aacff35c164e24109a743efd91b87172aabdee6a60c259b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1689619586880049734,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 20b6362d-9006-45c4-8ad5-31a8d7b2269d,},Annotations:map[string]string{io.kubernet
es.container.hash: cfd56cb1,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da4658fe6322f0ea4d660354d1d951915cceb5495aaeb7543c9cb1210b76d900,PodSandboxId:b297b48fcf66ae1f87ead733d38137152a6e78cfc6cf2b86190aad101da57558,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,State:CONTAINER_RUNNING,CreatedAt:1689619568779216946,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-66f6498c69-nld6z,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: ee8c7553-ddfd-4186-9c43-d21610f26972,},Annotations:map[string]string{io.kubernetes.container.hash: f46d7a95,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f28fea37ea3ff3a176237e04a113e4472f51e2ac954a170e4cb6fc0d9b46a478,PodSandboxId:71b660086048bb156c1348de28629365f9a688af96369262c791b0e5026cf7b5,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1689619558540187304,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-58478865f7-hktrs,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: cd41ae9a-6007-4317-8b82-6e40729c530d,},Annotations:map[string]string{io.kubernetes.container.hash: 1d7444f8,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86ef3fc2ac077fddb9696295599fd2ea576669262060a1c57c24c97ad719da81,PodSandboxId:fccb518b32ffbb0fd6a9e25929dd7481ec158b31b4534d30099e9efb6ff7daff,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdb
f80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1689619526394056561,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-mp4sv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d047f76a-35cb-4426-8e6b-e7af3ce13f3e,},Annotations:map[string]string{io.kubernetes.container.hash: 33ab19a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:894fcddfd0b63d395e234a4dac5b4adbb0cec6a44a134ae2c67534e93b9f1f48,PodSandboxId:eaf8825a8549ce6acba70b0e2ade1a7f9b54bd843013ae04489547516db48561,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-cert
gen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1689619522390540307,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nk254,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 28640f5f-0678-4cc8-a907-c28e9be776ba,},Annotations:map[string]string{io.kubernetes.container.hash: 835b9453,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:088f919f2ee5230339b8b8cec2b04226fc0c7fadbc1ca9b04f3592c4ac88460c,PodSandboxId:06a36d27567fbc91ae01ae557ad00637d5f7b21fc2ea570cc7db222f6defca24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-prov
isioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689619521464721708,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00adf529-ba9e-4cf9-b0a0-b328a57293ac,},Annotations:map[string]string{io.kubernetes.container.hash: c7ce65b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58ebd34c56b382ce5821084b795b713d1b3e8ba6850973297fa14ad86287f6cb,PodSandboxId:23b12278e0ff76a1611e9dddf36e0e135e3e10bec00aa59b4ad907bf5d580f36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8
428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689619491681034540,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f77hz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82da3265-b108-49c0-be5e-6ebfb39832a8,},Annotations:map[string]string{io.kubernetes.container.hash: 79caa543,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72e8253e3d276bc05695ef7cd810dcb2eadf498290410cde63b20facd0321b77,PodSandboxId:06a36d27567fbc91ae01ae557ad00637d5f7b21fc2ea570cc7db222f6defca24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19
e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689619488655481627,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00adf529-ba9e-4cf9-b0a0-b328a57293ac,},Annotations:map[string]string{io.kubernetes.container.hash: c7ce65b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ddadfa25d4f82406d962ba967b83f3eac95fe41753fc47c811f17fc7bd4f08d,PodSandboxId:c14595fef13fae6e9b8496cd58ca9611c881571bc09a8d1e6323b67b0d6e9527,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43a
de8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689619478301404939,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-jjvff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7ed0584-c74f-44df-996f-1c69319064f5,},Annotations:map[string]string{io.kubernetes.container.hash: b9f45280,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55cd1756ecfba2901a7efe8c295c4097a44e9c86fa2512d2b65d8f0e708444b7,PodSandboxId:344b61465ecbc13fb8a1c0687dcd612c8b46a81bdd0d5cfff668d81760d27b51,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{I
mage:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689619451447969447,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-962955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36a67ffa2f607f3b3daf122ae6adf6e4,},Annotations:map[string]string{io.kubernetes.container.hash: 2c25e98d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f82d8789057484631a5734bef91278461e451f2e9626d361b55474230203c236,PodSandboxId:7196a49d9b9ef2dd5f9c574d75fc27abea515c80e3a1e091f33b12eade2d6796,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d
94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689619451337702565,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-962955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6147f1cb481e74c3859bb9d1cba50269,},Annotations:map[string]string{io.kubernetes.container.hash: fcf5ac72,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eed3bed9358a2c4d1c4345919c221238c399c1ba82c35757488923fa9df4279,PodSandboxId:ab97789a317db185a91b823145a3bed6ec3853c694a9bed7feec233dc5ceda6c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc
930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689619451233993456,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-962955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc8e7a8e6227b251318d866127a9a6b1,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:780aa100b582282e4bc349f63bd05143d8aa40d1e82edb3c2f0fe232ff286825,PodSandboxId:eff8b4e433d93b6459441b272158e1348d3279a1226cc819ad83d4d4010aae99,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3
a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689619451115266825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-962955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19c3fe63480577575af7baec98918154,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=79a11ffe-ceb9-4e89-b2fa-1f358c5543a5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 18:48:56 addons-962955 crio[713]: time="2023-07-17 18:48:56.576338984Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b81947d7-9ced-4e89-9f6b-ebd51c731a8f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 18:48:56 addons-962955 crio[713]: time="2023-07-17 18:48:56.576444540Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b81947d7-9ced-4e89-9f6b-ebd51c731a8f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 18:48:56 addons-962955 crio[713]: time="2023-07-17 18:48:56.576773879Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:96405cb02b01e2b03673aff33dadb92d7dcc76129cee583a8dba305be60b36e5,PodSandboxId:ce83f56b05293b003e46a761fcedffbf2c9eb52bdd284a5ce086a27c5cbaa8b8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1689619728138109093,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-65bdb79f98-skxnl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 96c6d749-3842-4f90-9830-9f8a72df7c80,},Annotations:map[string]string{io.kubernetes.container.hash: 63a0e66f,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:addc566dbe541395dc0d8ba3a9aee521a88cd84d33ac1e170e19b85d78e29af6,PodSandboxId:1c08d21914fa0f898aacff35c164e24109a743efd91b87172aabdee6a60c259b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1689619586880049734,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 20b6362d-9006-45c4-8ad5-31a8d7b2269d,},Annotations:map[string]string{io.kubernet
es.container.hash: cfd56cb1,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da4658fe6322f0ea4d660354d1d951915cceb5495aaeb7543c9cb1210b76d900,PodSandboxId:b297b48fcf66ae1f87ead733d38137152a6e78cfc6cf2b86190aad101da57558,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,State:CONTAINER_RUNNING,CreatedAt:1689619568779216946,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-66f6498c69-nld6z,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: ee8c7553-ddfd-4186-9c43-d21610f26972,},Annotations:map[string]string{io.kubernetes.container.hash: f46d7a95,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f28fea37ea3ff3a176237e04a113e4472f51e2ac954a170e4cb6fc0d9b46a478,PodSandboxId:71b660086048bb156c1348de28629365f9a688af96369262c791b0e5026cf7b5,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1689619558540187304,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-58478865f7-hktrs,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: cd41ae9a-6007-4317-8b82-6e40729c530d,},Annotations:map[string]string{io.kubernetes.container.hash: 1d7444f8,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86ef3fc2ac077fddb9696295599fd2ea576669262060a1c57c24c97ad719da81,PodSandboxId:fccb518b32ffbb0fd6a9e25929dd7481ec158b31b4534d30099e9efb6ff7daff,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdb
f80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1689619526394056561,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-mp4sv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d047f76a-35cb-4426-8e6b-e7af3ce13f3e,},Annotations:map[string]string{io.kubernetes.container.hash: 33ab19a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:894fcddfd0b63d395e234a4dac5b4adbb0cec6a44a134ae2c67534e93b9f1f48,PodSandboxId:eaf8825a8549ce6acba70b0e2ade1a7f9b54bd843013ae04489547516db48561,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-cert
gen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1689619522390540307,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nk254,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 28640f5f-0678-4cc8-a907-c28e9be776ba,},Annotations:map[string]string{io.kubernetes.container.hash: 835b9453,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:088f919f2ee5230339b8b8cec2b04226fc0c7fadbc1ca9b04f3592c4ac88460c,PodSandboxId:06a36d27567fbc91ae01ae557ad00637d5f7b21fc2ea570cc7db222f6defca24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-prov
isioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689619521464721708,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00adf529-ba9e-4cf9-b0a0-b328a57293ac,},Annotations:map[string]string{io.kubernetes.container.hash: c7ce65b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58ebd34c56b382ce5821084b795b713d1b3e8ba6850973297fa14ad86287f6cb,PodSandboxId:23b12278e0ff76a1611e9dddf36e0e135e3e10bec00aa59b4ad907bf5d580f36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8
428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689619491681034540,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f77hz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82da3265-b108-49c0-be5e-6ebfb39832a8,},Annotations:map[string]string{io.kubernetes.container.hash: 79caa543,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72e8253e3d276bc05695ef7cd810dcb2eadf498290410cde63b20facd0321b77,PodSandboxId:06a36d27567fbc91ae01ae557ad00637d5f7b21fc2ea570cc7db222f6defca24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19
e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689619488655481627,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00adf529-ba9e-4cf9-b0a0-b328a57293ac,},Annotations:map[string]string{io.kubernetes.container.hash: c7ce65b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ddadfa25d4f82406d962ba967b83f3eac95fe41753fc47c811f17fc7bd4f08d,PodSandboxId:c14595fef13fae6e9b8496cd58ca9611c881571bc09a8d1e6323b67b0d6e9527,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43a
de8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689619478301404939,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-jjvff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7ed0584-c74f-44df-996f-1c69319064f5,},Annotations:map[string]string{io.kubernetes.container.hash: b9f45280,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55cd1756ecfba2901a7efe8c295c4097a44e9c86fa2512d2b65d8f0e708444b7,PodSandboxId:344b61465ecbc13fb8a1c0687dcd612c8b46a81bdd0d5cfff668d81760d27b51,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{I
mage:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689619451447969447,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-962955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36a67ffa2f607f3b3daf122ae6adf6e4,},Annotations:map[string]string{io.kubernetes.container.hash: 2c25e98d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f82d8789057484631a5734bef91278461e451f2e9626d361b55474230203c236,PodSandboxId:7196a49d9b9ef2dd5f9c574d75fc27abea515c80e3a1e091f33b12eade2d6796,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d
94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689619451337702565,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-962955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6147f1cb481e74c3859bb9d1cba50269,},Annotations:map[string]string{io.kubernetes.container.hash: fcf5ac72,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eed3bed9358a2c4d1c4345919c221238c399c1ba82c35757488923fa9df4279,PodSandboxId:ab97789a317db185a91b823145a3bed6ec3853c694a9bed7feec233dc5ceda6c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc
930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689619451233993456,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-962955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc8e7a8e6227b251318d866127a9a6b1,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:780aa100b582282e4bc349f63bd05143d8aa40d1e82edb3c2f0fe232ff286825,PodSandboxId:eff8b4e433d93b6459441b272158e1348d3279a1226cc819ad83d4d4010aae99,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3
a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689619451115266825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-962955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19c3fe63480577575af7baec98918154,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b81947d7-9ced-4e89-9f6b-ebd51c731a8f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 18:48:56 addons-962955 crio[713]: time="2023-07-17 18:48:56.614984354Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=abe87419-a3fc-427c-a67c-61dff2bdff24 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 18:48:56 addons-962955 crio[713]: time="2023-07-17 18:48:56.615084266Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=abe87419-a3fc-427c-a67c-61dff2bdff24 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 18:48:56 addons-962955 crio[713]: time="2023-07-17 18:48:56.615402130Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:96405cb02b01e2b03673aff33dadb92d7dcc76129cee583a8dba305be60b36e5,PodSandboxId:ce83f56b05293b003e46a761fcedffbf2c9eb52bdd284a5ce086a27c5cbaa8b8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1689619728138109093,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-65bdb79f98-skxnl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 96c6d749-3842-4f90-9830-9f8a72df7c80,},Annotations:map[string]string{io.kubernetes.container.hash: 63a0e66f,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:addc566dbe541395dc0d8ba3a9aee521a88cd84d33ac1e170e19b85d78e29af6,PodSandboxId:1c08d21914fa0f898aacff35c164e24109a743efd91b87172aabdee6a60c259b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1689619586880049734,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 20b6362d-9006-45c4-8ad5-31a8d7b2269d,},Annotations:map[string]string{io.kubernet
es.container.hash: cfd56cb1,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da4658fe6322f0ea4d660354d1d951915cceb5495aaeb7543c9cb1210b76d900,PodSandboxId:b297b48fcf66ae1f87ead733d38137152a6e78cfc6cf2b86190aad101da57558,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,State:CONTAINER_RUNNING,CreatedAt:1689619568779216946,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-66f6498c69-nld6z,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: ee8c7553-ddfd-4186-9c43-d21610f26972,},Annotations:map[string]string{io.kubernetes.container.hash: f46d7a95,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f28fea37ea3ff3a176237e04a113e4472f51e2ac954a170e4cb6fc0d9b46a478,PodSandboxId:71b660086048bb156c1348de28629365f9a688af96369262c791b0e5026cf7b5,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1689619558540187304,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-58478865f7-hktrs,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: cd41ae9a-6007-4317-8b82-6e40729c530d,},Annotations:map[string]string{io.kubernetes.container.hash: 1d7444f8,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86ef3fc2ac077fddb9696295599fd2ea576669262060a1c57c24c97ad719da81,PodSandboxId:fccb518b32ffbb0fd6a9e25929dd7481ec158b31b4534d30099e9efb6ff7daff,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdb
f80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1689619526394056561,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-mp4sv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d047f76a-35cb-4426-8e6b-e7af3ce13f3e,},Annotations:map[string]string{io.kubernetes.container.hash: 33ab19a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:894fcddfd0b63d395e234a4dac5b4adbb0cec6a44a134ae2c67534e93b9f1f48,PodSandboxId:eaf8825a8549ce6acba70b0e2ade1a7f9b54bd843013ae04489547516db48561,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-cert
gen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1689619522390540307,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nk254,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 28640f5f-0678-4cc8-a907-c28e9be776ba,},Annotations:map[string]string{io.kubernetes.container.hash: 835b9453,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:088f919f2ee5230339b8b8cec2b04226fc0c7fadbc1ca9b04f3592c4ac88460c,PodSandboxId:06a36d27567fbc91ae01ae557ad00637d5f7b21fc2ea570cc7db222f6defca24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-prov
isioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689619521464721708,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00adf529-ba9e-4cf9-b0a0-b328a57293ac,},Annotations:map[string]string{io.kubernetes.container.hash: c7ce65b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58ebd34c56b382ce5821084b795b713d1b3e8ba6850973297fa14ad86287f6cb,PodSandboxId:23b12278e0ff76a1611e9dddf36e0e135e3e10bec00aa59b4ad907bf5d580f36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8
428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689619491681034540,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f77hz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82da3265-b108-49c0-be5e-6ebfb39832a8,},Annotations:map[string]string{io.kubernetes.container.hash: 79caa543,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72e8253e3d276bc05695ef7cd810dcb2eadf498290410cde63b20facd0321b77,PodSandboxId:06a36d27567fbc91ae01ae557ad00637d5f7b21fc2ea570cc7db222f6defca24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19
e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689619488655481627,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00adf529-ba9e-4cf9-b0a0-b328a57293ac,},Annotations:map[string]string{io.kubernetes.container.hash: c7ce65b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ddadfa25d4f82406d962ba967b83f3eac95fe41753fc47c811f17fc7bd4f08d,PodSandboxId:c14595fef13fae6e9b8496cd58ca9611c881571bc09a8d1e6323b67b0d6e9527,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43a
de8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689619478301404939,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-jjvff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7ed0584-c74f-44df-996f-1c69319064f5,},Annotations:map[string]string{io.kubernetes.container.hash: b9f45280,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55cd1756ecfba2901a7efe8c295c4097a44e9c86fa2512d2b65d8f0e708444b7,PodSandboxId:344b61465ecbc13fb8a1c0687dcd612c8b46a81bdd0d5cfff668d81760d27b51,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{I
mage:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689619451447969447,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-962955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36a67ffa2f607f3b3daf122ae6adf6e4,},Annotations:map[string]string{io.kubernetes.container.hash: 2c25e98d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f82d8789057484631a5734bef91278461e451f2e9626d361b55474230203c236,PodSandboxId:7196a49d9b9ef2dd5f9c574d75fc27abea515c80e3a1e091f33b12eade2d6796,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d
94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689619451337702565,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-962955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6147f1cb481e74c3859bb9d1cba50269,},Annotations:map[string]string{io.kubernetes.container.hash: fcf5ac72,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eed3bed9358a2c4d1c4345919c221238c399c1ba82c35757488923fa9df4279,PodSandboxId:ab97789a317db185a91b823145a3bed6ec3853c694a9bed7feec233dc5ceda6c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc
930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689619451233993456,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-962955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc8e7a8e6227b251318d866127a9a6b1,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:780aa100b582282e4bc349f63bd05143d8aa40d1e82edb3c2f0fe232ff286825,PodSandboxId:eff8b4e433d93b6459441b272158e1348d3279a1226cc819ad83d4d4010aae99,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3
a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689619451115266825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-962955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19c3fe63480577575af7baec98918154,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=abe87419-a3fc-427c-a67c-61dff2bdff24 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 18:48:56 addons-962955 crio[713]: time="2023-07-17 18:48:56.648666046Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a02333ad-f04d-4736-a9e5-1603b6cccd7e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 18:48:56 addons-962955 crio[713]: time="2023-07-17 18:48:56.648767974Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a02333ad-f04d-4736-a9e5-1603b6cccd7e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 18:48:56 addons-962955 crio[713]: time="2023-07-17 18:48:56.649186383Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:96405cb02b01e2b03673aff33dadb92d7dcc76129cee583a8dba305be60b36e5,PodSandboxId:ce83f56b05293b003e46a761fcedffbf2c9eb52bdd284a5ce086a27c5cbaa8b8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1689619728138109093,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-65bdb79f98-skxnl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 96c6d749-3842-4f90-9830-9f8a72df7c80,},Annotations:map[string]string{io.kubernetes.container.hash: 63a0e66f,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:addc566dbe541395dc0d8ba3a9aee521a88cd84d33ac1e170e19b85d78e29af6,PodSandboxId:1c08d21914fa0f898aacff35c164e24109a743efd91b87172aabdee6a60c259b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1689619586880049734,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 20b6362d-9006-45c4-8ad5-31a8d7b2269d,},Annotations:map[string]string{io.kubernet
es.container.hash: cfd56cb1,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da4658fe6322f0ea4d660354d1d951915cceb5495aaeb7543c9cb1210b76d900,PodSandboxId:b297b48fcf66ae1f87ead733d38137152a6e78cfc6cf2b86190aad101da57558,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,State:CONTAINER_RUNNING,CreatedAt:1689619568779216946,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-66f6498c69-nld6z,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: ee8c7553-ddfd-4186-9c43-d21610f26972,},Annotations:map[string]string{io.kubernetes.container.hash: f46d7a95,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f28fea37ea3ff3a176237e04a113e4472f51e2ac954a170e4cb6fc0d9b46a478,PodSandboxId:71b660086048bb156c1348de28629365f9a688af96369262c791b0e5026cf7b5,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1689619558540187304,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-58478865f7-hktrs,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: cd41ae9a-6007-4317-8b82-6e40729c530d,},Annotations:map[string]string{io.kubernetes.container.hash: 1d7444f8,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86ef3fc2ac077fddb9696295599fd2ea576669262060a1c57c24c97ad719da81,PodSandboxId:fccb518b32ffbb0fd6a9e25929dd7481ec158b31b4534d30099e9efb6ff7daff,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdb
f80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1689619526394056561,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-mp4sv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d047f76a-35cb-4426-8e6b-e7af3ce13f3e,},Annotations:map[string]string{io.kubernetes.container.hash: 33ab19a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:894fcddfd0b63d395e234a4dac5b4adbb0cec6a44a134ae2c67534e93b9f1f48,PodSandboxId:eaf8825a8549ce6acba70b0e2ade1a7f9b54bd843013ae04489547516db48561,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-cert
gen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1689619522390540307,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nk254,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 28640f5f-0678-4cc8-a907-c28e9be776ba,},Annotations:map[string]string{io.kubernetes.container.hash: 835b9453,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:088f919f2ee5230339b8b8cec2b04226fc0c7fadbc1ca9b04f3592c4ac88460c,PodSandboxId:06a36d27567fbc91ae01ae557ad00637d5f7b21fc2ea570cc7db222f6defca24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-prov
isioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689619521464721708,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00adf529-ba9e-4cf9-b0a0-b328a57293ac,},Annotations:map[string]string{io.kubernetes.container.hash: c7ce65b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58ebd34c56b382ce5821084b795b713d1b3e8ba6850973297fa14ad86287f6cb,PodSandboxId:23b12278e0ff76a1611e9dddf36e0e135e3e10bec00aa59b4ad907bf5d580f36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8
428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689619491681034540,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f77hz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82da3265-b108-49c0-be5e-6ebfb39832a8,},Annotations:map[string]string{io.kubernetes.container.hash: 79caa543,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72e8253e3d276bc05695ef7cd810dcb2eadf498290410cde63b20facd0321b77,PodSandboxId:06a36d27567fbc91ae01ae557ad00637d5f7b21fc2ea570cc7db222f6defca24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19
e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689619488655481627,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00adf529-ba9e-4cf9-b0a0-b328a57293ac,},Annotations:map[string]string{io.kubernetes.container.hash: c7ce65b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ddadfa25d4f82406d962ba967b83f3eac95fe41753fc47c811f17fc7bd4f08d,PodSandboxId:c14595fef13fae6e9b8496cd58ca9611c881571bc09a8d1e6323b67b0d6e9527,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43a
de8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689619478301404939,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-jjvff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7ed0584-c74f-44df-996f-1c69319064f5,},Annotations:map[string]string{io.kubernetes.container.hash: b9f45280,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55cd1756ecfba2901a7efe8c295c4097a44e9c86fa2512d2b65d8f0e708444b7,PodSandboxId:344b61465ecbc13fb8a1c0687dcd612c8b46a81bdd0d5cfff668d81760d27b51,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{I
mage:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689619451447969447,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-962955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36a67ffa2f607f3b3daf122ae6adf6e4,},Annotations:map[string]string{io.kubernetes.container.hash: 2c25e98d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f82d8789057484631a5734bef91278461e451f2e9626d361b55474230203c236,PodSandboxId:7196a49d9b9ef2dd5f9c574d75fc27abea515c80e3a1e091f33b12eade2d6796,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d
94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689619451337702565,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-962955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6147f1cb481e74c3859bb9d1cba50269,},Annotations:map[string]string{io.kubernetes.container.hash: fcf5ac72,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eed3bed9358a2c4d1c4345919c221238c399c1ba82c35757488923fa9df4279,PodSandboxId:ab97789a317db185a91b823145a3bed6ec3853c694a9bed7feec233dc5ceda6c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc
930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689619451233993456,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-962955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc8e7a8e6227b251318d866127a9a6b1,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:780aa100b582282e4bc349f63bd05143d8aa40d1e82edb3c2f0fe232ff286825,PodSandboxId:eff8b4e433d93b6459441b272158e1348d3279a1226cc819ad83d4d4010aae99,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3
a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689619451115266825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-962955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19c3fe63480577575af7baec98918154,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a02333ad-f04d-4736-a9e5-1603b6cccd7e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 18:48:56 addons-962955 crio[713]: time="2023-07-17 18:48:56.668290884Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=2cf1a713-4fbc-42f8-be78-69774eb37070 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 18:48:56 addons-962955 crio[713]: time="2023-07-17 18:48:56.668813223Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ce83f56b05293b003e46a761fcedffbf2c9eb52bdd284a5ce086a27c5cbaa8b8,Metadata:&PodSandboxMetadata{Name:hello-world-app-65bdb79f98-skxnl,Uid:96c6d749-3842-4f90-9830-9f8a72df7c80,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689619725801730228,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-65bdb79f98-skxnl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 96c6d749-3842-4f90-9830-9f8a72df7c80,pod-template-hash: 65bdb79f98,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T18:48:45.447561830Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1c08d21914fa0f898aacff35c164e24109a743efd91b87172aabdee6a60c259b,Metadata:&PodSandboxMetadata{Name:nginx,Uid:20b6362d-9006-45c4-8ad5-31a8d7b2269d,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1689619577805783348,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 20b6362d-9006-45c4-8ad5-31a8d7b2269d,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T18:46:17.407213574Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b297b48fcf66ae1f87ead733d38137152a6e78cfc6cf2b86190aad101da57558,Metadata:&PodSandboxMetadata{Name:headlamp-66f6498c69-nld6z,Uid:ee8c7553-ddfd-4186-9c43-d21610f26972,Namespace:headlamp,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689619562410563055,Labels:map[string]string{app.kubernetes.io/instance: headlamp,app.kubernetes.io/name: headlamp,io.kubernetes.container.name: POD,io.kubernetes.pod.name: headlamp-66f6498c69-nld6z,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: ee8c7553-ddfd-4186-9c43-d21610f26972,pod-template-hash: 66f6498c69,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-
07-17T18:46:02.070177991Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:71b660086048bb156c1348de28629365f9a688af96369262c791b0e5026cf7b5,Metadata:&PodSandboxMetadata{Name:gcp-auth-58478865f7-hktrs,Uid:cd41ae9a-6007-4317-8b82-6e40729c530d,Namespace:gcp-auth,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689619550845569179,Labels:map[string]string{app: gcp-auth,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gcp-auth-58478865f7-hktrs,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: cd41ae9a-6007-4317-8b82-6e40729c530d,kubernetes.io/minikube-addons: gcp-auth,pod-template-hash: 58478865f7,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T18:44:45.363791512Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:460af693b3f3ae9aee6a4ed4ac6add77753940f5b14536905a4f57fa4e66cd33,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-7799c6795f-vmpkq,Uid:00a93ffe-c6e6-42bb-9bb1-4e5f6373a052,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NO
TREADY,CreatedAt:1689619545460723341,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-7799c6795f-vmpkq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 00a93ffe-c6e6-42bb-9bb1-4e5f6373a052,pod-template-hash: 7799c6795f,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T18:44:41.046981095Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:eaf8825a8549ce6acba70b0e2ade1a7f9b54bd843013ae04489547516db48561,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-create-nk254,Uid:28640f5f-0678-4cc8-a907-c28e9be776ba,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1689619481537754189,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.
kubernetes.io/controller-uid: d8fa8f8b-1d73-4c4e-8916-aaf5ac41f79d,batch.kubernetes.io/job-name: ingress-nginx-admission-create,controller-uid: d8fa8f8b-1d73-4c4e-8916-aaf5ac41f79d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-create-nk254,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 28640f5f-0678-4cc8-a907-c28e9be776ba,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T18:44:41.174996822Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fccb518b32ffbb0fd6a9e25929dd7481ec158b31b4534d30099e9efb6ff7daff,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-mp4sv,Uid:d047f76a-35cb-4426-8e6b-e7af3ce13f3e,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1689619481508010102,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controlle
r-uid: 3630dbcb-1592-499c-bee7-00b3decc4406,batch.kubernetes.io/job-name: ingress-nginx-admission-patch,controller-uid: 3630dbcb-1592-499c-bee7-00b3decc4406,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-patch-mp4sv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d047f76a-35cb-4426-8e6b-e7af3ce13f3e,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T18:44:41.164792544Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:06a36d27567fbc91ae01ae557ad00637d5f7b21fc2ea570cc7db222f6defca24,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:00adf529-ba9e-4cf9-b0a0-b328a57293ac,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689619480429471266,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 00adf529-ba9e-4cf9-b0a0-b328a57293ac,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-07-17T18:44:39.771114156Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:25fddd33150ad5cb41fd59e8430c90f32421c7c3277afe9d00594cb645df50d1,Metadata:&PodSandboxMetadata{Nam
e:kube-ingress-dns-minikube,Uid:8370a070-a195-4a63-8f95-3fa42711cefa,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1689619479528577148,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8370a070-a195-4a63-8f95-3fa42711cefa,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce
6cf215a75b78f05f\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"protocol\":\"UDP\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\"}}\n,kubernetes.io/config.seen: 2023-07-17T18:44:38.543077428Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c14595fef13fae6e9b8496cd58ca9611c881571bc09a8d1e6323b67b0d6e9527,Metadata:&PodSandboxMetadata{Name:coredns-5d78c9869d-jjvff,Uid:d7ed0584-c74f-44df-996f-1c69319064f5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689619473294254476,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5d78c9869d-jjvff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7ed0584-c74f-44df-996f-1c69319064f5,k8s-app: kube-dns,pod-template-hash: 5d78c9869d,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T18:44:32.955930670Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:23b12278e0ff7
6a1611e9dddf36e0e135e3e10bec00aa59b4ad907bf5d580f36,Metadata:&PodSandboxMetadata{Name:kube-proxy-f77hz,Uid:82da3265-b108-49c0-be5e-6ebfb39832a8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689619473129611159,Labels:map[string]string{controller-revision-hash: 56999f657b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-f77hz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82da3265-b108-49c0-be5e-6ebfb39832a8,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T18:44:31.277319458Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ab97789a317db185a91b823145a3bed6ec3853c694a9bed7feec233dc5ceda6c,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-962955,Uid:bc8e7a8e6227b251318d866127a9a6b1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689619450520620732,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,i
o.kubernetes.pod.name: kube-controller-manager-addons-962955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc8e7a8e6227b251318d866127a9a6b1,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: bc8e7a8e6227b251318d866127a9a6b1,kubernetes.io/config.seen: 2023-07-17T18:44:09.689058399Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:344b61465ecbc13fb8a1c0687dcd612c8b46a81bdd0d5cfff668d81760d27b51,Metadata:&PodSandboxMetadata{Name:etcd-addons-962955,Uid:36a67ffa2f607f3b3daf122ae6adf6e4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689619450508582658,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-962955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36a67ffa2f607f3b3daf122ae6adf6e4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.215:2379,kubernetes.io/config.hash: 36a67ffa2f607f3b
3daf122ae6adf6e4,kubernetes.io/config.seen: 2023-07-17T18:44:09.689051543Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:eff8b4e433d93b6459441b272158e1348d3279a1226cc819ad83d4d4010aae99,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-962955,Uid:19c3fe63480577575af7baec98918154,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689619450450570897,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-962955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19c3fe63480577575af7baec98918154,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 19c3fe63480577575af7baec98918154,kubernetes.io/config.seen: 2023-07-17T18:44:09.689059429Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7196a49d9b9ef2dd5f9c574d75fc27abea515c80e3a1e091f33b12eade2d6796,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-962955,Uid:6147f1cb481e74c3859bb9d1c
ba50269,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689619450420263968,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-962955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6147f1cb481e74c3859bb9d1cba50269,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.215:8443,kubernetes.io/config.hash: 6147f1cb481e74c3859bb9d1cba50269,kubernetes.io/config.seen: 2023-07-17T18:44:09.689057069Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=2cf1a713-4fbc-42f8-be78-69774eb37070 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 18:48:56 addons-962955 crio[713]: time="2023-07-17 18:48:56.669985487Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=95dc5a7d-be22-4e67-a648-b262c0255e6a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:48:56 addons-962955 crio[713]: time="2023-07-17 18:48:56.670195684Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=95dc5a7d-be22-4e67-a648-b262c0255e6a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:48:56 addons-962955 crio[713]: time="2023-07-17 18:48:56.672017081Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:96405cb02b01e2b03673aff33dadb92d7dcc76129cee583a8dba305be60b36e5,PodSandboxId:ce83f56b05293b003e46a761fcedffbf2c9eb52bdd284a5ce086a27c5cbaa8b8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1689619728138109093,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-65bdb79f98-skxnl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 96c6d749-3842-4f90-9830-9f8a72df7c80,},Annotations:map[string]string{io.kubernetes.container.hash: 63a0e66f,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:addc566dbe541395dc0d8ba3a9aee521a88cd84d33ac1e170e19b85d78e29af6,PodSandboxId:1c08d21914fa0f898aacff35c164e24109a743efd91b87172aabdee6a60c259b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1689619586880049734,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 20b6362d-9006-45c4-8ad5-31a8d7b2269d,},Annotations:map[string]string{io.kubernet
es.container.hash: cfd56cb1,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da4658fe6322f0ea4d660354d1d951915cceb5495aaeb7543c9cb1210b76d900,PodSandboxId:b297b48fcf66ae1f87ead733d38137152a6e78cfc6cf2b86190aad101da57558,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,State:CONTAINER_RUNNING,CreatedAt:1689619568779216946,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-66f6498c69-nld6z,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: ee8c7553-ddfd-4186-9c43-d21610f26972,},Annotations:map[string]string{io.kubernetes.container.hash: f46d7a95,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f28fea37ea3ff3a176237e04a113e4472f51e2ac954a170e4cb6fc0d9b46a478,PodSandboxId:71b660086048bb156c1348de28629365f9a688af96369262c791b0e5026cf7b5,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1689619558540187304,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-58478865f7-hktrs,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: cd41ae9a-6007-4317-8b82-6e40729c530d,},Annotations:map[string]string{io.kubernetes.container.hash: 1d7444f8,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86ef3fc2ac077fddb9696295599fd2ea576669262060a1c57c24c97ad719da81,PodSandboxId:fccb518b32ffbb0fd6a9e25929dd7481ec158b31b4534d30099e9efb6ff7daff,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdb
f80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1689619526394056561,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-mp4sv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d047f76a-35cb-4426-8e6b-e7af3ce13f3e,},Annotations:map[string]string{io.kubernetes.container.hash: 33ab19a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:894fcddfd0b63d395e234a4dac5b4adbb0cec6a44a134ae2c67534e93b9f1f48,PodSandboxId:eaf8825a8549ce6acba70b0e2ade1a7f9b54bd843013ae04489547516db48561,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-cert
gen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1689619522390540307,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nk254,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 28640f5f-0678-4cc8-a907-c28e9be776ba,},Annotations:map[string]string{io.kubernetes.container.hash: 835b9453,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:088f919f2ee5230339b8b8cec2b04226fc0c7fadbc1ca9b04f3592c4ac88460c,PodSandboxId:06a36d27567fbc91ae01ae557ad00637d5f7b21fc2ea570cc7db222f6defca24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-prov
isioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689619521464721708,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00adf529-ba9e-4cf9-b0a0-b328a57293ac,},Annotations:map[string]string{io.kubernetes.container.hash: c7ce65b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58ebd34c56b382ce5821084b795b713d1b3e8ba6850973297fa14ad86287f6cb,PodSandboxId:23b12278e0ff76a1611e9dddf36e0e135e3e10bec00aa59b4ad907bf5d580f36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8
428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689619491681034540,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f77hz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82da3265-b108-49c0-be5e-6ebfb39832a8,},Annotations:map[string]string{io.kubernetes.container.hash: 79caa543,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72e8253e3d276bc05695ef7cd810dcb2eadf498290410cde63b20facd0321b77,PodSandboxId:06a36d27567fbc91ae01ae557ad00637d5f7b21fc2ea570cc7db222f6defca24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19
e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689619488655481627,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00adf529-ba9e-4cf9-b0a0-b328a57293ac,},Annotations:map[string]string{io.kubernetes.container.hash: c7ce65b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ddadfa25d4f82406d962ba967b83f3eac95fe41753fc47c811f17fc7bd4f08d,PodSandboxId:c14595fef13fae6e9b8496cd58ca9611c881571bc09a8d1e6323b67b0d6e9527,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43a
de8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689619478301404939,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-jjvff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7ed0584-c74f-44df-996f-1c69319064f5,},Annotations:map[string]string{io.kubernetes.container.hash: b9f45280,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55cd1756ecfba2901a7efe8c295c4097a44e9c86fa2512d2b65d8f0e708444b7,PodSandboxId:344b61465ecbc13fb8a1c0687dcd612c8b46a81bdd0d5cfff668d81760d27b51,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{I
mage:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689619451447969447,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-962955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36a67ffa2f607f3b3daf122ae6adf6e4,},Annotations:map[string]string{io.kubernetes.container.hash: 2c25e98d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f82d8789057484631a5734bef91278461e451f2e9626d361b55474230203c236,PodSandboxId:7196a49d9b9ef2dd5f9c574d75fc27abea515c80e3a1e091f33b12eade2d6796,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d
94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689619451337702565,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-962955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6147f1cb481e74c3859bb9d1cba50269,},Annotations:map[string]string{io.kubernetes.container.hash: fcf5ac72,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eed3bed9358a2c4d1c4345919c221238c399c1ba82c35757488923fa9df4279,PodSandboxId:ab97789a317db185a91b823145a3bed6ec3853c694a9bed7feec233dc5ceda6c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc
930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689619451233993456,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-962955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc8e7a8e6227b251318d866127a9a6b1,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:780aa100b582282e4bc349f63bd05143d8aa40d1e82edb3c2f0fe232ff286825,PodSandboxId:eff8b4e433d93b6459441b272158e1348d3279a1226cc819ad83d4d4010aae99,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3
a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689619451115266825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-962955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19c3fe63480577575af7baec98918154,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=95dc5a7d-be22-4e67-a648-b262c0255e6a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:48:56 addons-962955 crio[713]: time="2023-07-17 18:48:56.692911103Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9badd2f4-8ecf-47a3-ac25-94c86c280c8c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 18:48:56 addons-962955 crio[713]: time="2023-07-17 18:48:56.693031836Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9badd2f4-8ecf-47a3-ac25-94c86c280c8c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 18:48:56 addons-962955 crio[713]: time="2023-07-17 18:48:56.693569135Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:96405cb02b01e2b03673aff33dadb92d7dcc76129cee583a8dba305be60b36e5,PodSandboxId:ce83f56b05293b003e46a761fcedffbf2c9eb52bdd284a5ce086a27c5cbaa8b8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1689619728138109093,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-65bdb79f98-skxnl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 96c6d749-3842-4f90-9830-9f8a72df7c80,},Annotations:map[string]string{io.kubernetes.container.hash: 63a0e66f,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:addc566dbe541395dc0d8ba3a9aee521a88cd84d33ac1e170e19b85d78e29af6,PodSandboxId:1c08d21914fa0f898aacff35c164e24109a743efd91b87172aabdee6a60c259b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1689619586880049734,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 20b6362d-9006-45c4-8ad5-31a8d7b2269d,},Annotations:map[string]string{io.kubernet
es.container.hash: cfd56cb1,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da4658fe6322f0ea4d660354d1d951915cceb5495aaeb7543c9cb1210b76d900,PodSandboxId:b297b48fcf66ae1f87ead733d38137152a6e78cfc6cf2b86190aad101da57558,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,State:CONTAINER_RUNNING,CreatedAt:1689619568779216946,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-66f6498c69-nld6z,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: ee8c7553-ddfd-4186-9c43-d21610f26972,},Annotations:map[string]string{io.kubernetes.container.hash: f46d7a95,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f28fea37ea3ff3a176237e04a113e4472f51e2ac954a170e4cb6fc0d9b46a478,PodSandboxId:71b660086048bb156c1348de28629365f9a688af96369262c791b0e5026cf7b5,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1689619558540187304,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-58478865f7-hktrs,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: cd41ae9a-6007-4317-8b82-6e40729c530d,},Annotations:map[string]string{io.kubernetes.container.hash: 1d7444f8,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86ef3fc2ac077fddb9696295599fd2ea576669262060a1c57c24c97ad719da81,PodSandboxId:fccb518b32ffbb0fd6a9e25929dd7481ec158b31b4534d30099e9efb6ff7daff,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdb
f80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1689619526394056561,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-mp4sv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d047f76a-35cb-4426-8e6b-e7af3ce13f3e,},Annotations:map[string]string{io.kubernetes.container.hash: 33ab19a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:894fcddfd0b63d395e234a4dac5b4adbb0cec6a44a134ae2c67534e93b9f1f48,PodSandboxId:eaf8825a8549ce6acba70b0e2ade1a7f9b54bd843013ae04489547516db48561,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-cert
gen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1689619522390540307,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nk254,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 28640f5f-0678-4cc8-a907-c28e9be776ba,},Annotations:map[string]string{io.kubernetes.container.hash: 835b9453,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:088f919f2ee5230339b8b8cec2b04226fc0c7fadbc1ca9b04f3592c4ac88460c,PodSandboxId:06a36d27567fbc91ae01ae557ad00637d5f7b21fc2ea570cc7db222f6defca24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-prov
isioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689619521464721708,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00adf529-ba9e-4cf9-b0a0-b328a57293ac,},Annotations:map[string]string{io.kubernetes.container.hash: c7ce65b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58ebd34c56b382ce5821084b795b713d1b3e8ba6850973297fa14ad86287f6cb,PodSandboxId:23b12278e0ff76a1611e9dddf36e0e135e3e10bec00aa59b4ad907bf5d580f36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8
428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689619491681034540,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f77hz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82da3265-b108-49c0-be5e-6ebfb39832a8,},Annotations:map[string]string{io.kubernetes.container.hash: 79caa543,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72e8253e3d276bc05695ef7cd810dcb2eadf498290410cde63b20facd0321b77,PodSandboxId:06a36d27567fbc91ae01ae557ad00637d5f7b21fc2ea570cc7db222f6defca24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19
e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689619488655481627,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00adf529-ba9e-4cf9-b0a0-b328a57293ac,},Annotations:map[string]string{io.kubernetes.container.hash: c7ce65b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ddadfa25d4f82406d962ba967b83f3eac95fe41753fc47c811f17fc7bd4f08d,PodSandboxId:c14595fef13fae6e9b8496cd58ca9611c881571bc09a8d1e6323b67b0d6e9527,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43a
de8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689619478301404939,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-jjvff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7ed0584-c74f-44df-996f-1c69319064f5,},Annotations:map[string]string{io.kubernetes.container.hash: b9f45280,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55cd1756ecfba2901a7efe8c295c4097a44e9c86fa2512d2b65d8f0e708444b7,PodSandboxId:344b61465ecbc13fb8a1c0687dcd612c8b46a81bdd0d5cfff668d81760d27b51,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{I
mage:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689619451447969447,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-962955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36a67ffa2f607f3b3daf122ae6adf6e4,},Annotations:map[string]string{io.kubernetes.container.hash: 2c25e98d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f82d8789057484631a5734bef91278461e451f2e9626d361b55474230203c236,PodSandboxId:7196a49d9b9ef2dd5f9c574d75fc27abea515c80e3a1e091f33b12eade2d6796,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d
94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689619451337702565,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-962955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6147f1cb481e74c3859bb9d1cba50269,},Annotations:map[string]string{io.kubernetes.container.hash: fcf5ac72,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eed3bed9358a2c4d1c4345919c221238c399c1ba82c35757488923fa9df4279,PodSandboxId:ab97789a317db185a91b823145a3bed6ec3853c694a9bed7feec233dc5ceda6c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc
930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689619451233993456,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-962955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc8e7a8e6227b251318d866127a9a6b1,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:780aa100b582282e4bc349f63bd05143d8aa40d1e82edb3c2f0fe232ff286825,PodSandboxId:eff8b4e433d93b6459441b272158e1348d3279a1226cc819ad83d4d4010aae99,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3
a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689619451115266825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-962955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19c3fe63480577575af7baec98918154,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9badd2f4-8ecf-47a3-ac25-94c86c280c8c name=/runtime.v1alpha2.RuntimeService/ListContainers
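The CRI-O debug entries above show the runtime answering CRI ListPodSandbox/ListContainers requests; the same container list is dumped twice because the call is served on both the runtime.v1 and the legacy runtime.v1alpha2 RuntimeService endpoints (note the two distinct request ids). As a rough, illustrative way to pull these entries back out of the node when reading a report like this, assuming the standard minikube image where crio runs as a systemd unit and logs to the journal:

  minikube -p addons-962955 ssh "sudo journalctl -u crio | grep ListContainers | tail -n 5"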
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID
	96405cb02b01e       gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea                      8 seconds ago       Running             hello-world-app           0                   ce83f56b05293
	addc566dbe541       docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6                              2 minutes ago       Running             nginx                     0                   1c08d21914fa0
	da4658fe6322f       ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45                        2 minutes ago       Running             headlamp                  0                   b297b48fcf66a
	f28fea37ea3ff       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   71b660086048b
	86ef3fc2ac077       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d   3 minutes ago       Exited              patch                     0                   fccb518b32ffb
	894fcddfd0b63       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d   3 minutes ago       Exited              create                    0                   eaf8825a8549c
	088f919f2ee52       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       1                   06a36d27567fb
	58ebd34c56b38       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c                                                             4 minutes ago       Running             kube-proxy                0                   23b12278e0ff7
	72e8253e3d276       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Exited              storage-provisioner       0                   06a36d27567fb
	8ddadfa25d4f8       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago       Running             coredns                   0                   c14595fef13fa
	55cd1756ecfba       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681                                                             4 minutes ago       Running             etcd                      0                   344b61465ecbc
	f82d878905748       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a                                                             4 minutes ago       Running             kube-apiserver            0                   7196a49d9b9ef
	3eed3bed9358a       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f                                                             4 minutes ago       Running             kube-controller-manager   0                   ab97789a317db
	780aa100b5822       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a                                                             4 minutes ago       Running             kube-scheduler            0                   eff8b4e433d93
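The container status table above is the condensed form of the same ListContainers data. A minimal, illustrative way to reproduce it by hand, assuming crictl is available inside the minikube VM and using the CRI-O socket recorded in the node annotations below:

  minikube -p addons-962955 ssh
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a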
	
	* 
	* ==> coredns [8ddadfa25d4f82406d962ba967b83f3eac95fe41753fc47c811f17fc7bd4f08d] <==
	* [INFO] 10.244.0.5:55485 - 57129 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000173061s
	[INFO] 10.244.0.5:38875 - 23725 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000081851s
	[INFO] 10.244.0.5:38875 - 54702 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000063795s
	[INFO] 10.244.0.5:40564 - 45425 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000077165s
	[INFO] 10.244.0.5:40564 - 38259 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000064782s
	[INFO] 10.244.0.5:40652 - 37981 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000078666s
	[INFO] 10.244.0.5:40652 - 2396 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000067036s
	[INFO] 10.244.0.5:52357 - 2074 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000082478s
	[INFO] 10.244.0.5:52357 - 29982 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000039641s
	[INFO] 10.244.0.5:58433 - 65354 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000050734s
	[INFO] 10.244.0.5:58433 - 8271 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000049194s
	[INFO] 10.244.0.5:44681 - 2212 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000025657s
	[INFO] 10.244.0.5:44681 - 22698 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00003694s
	[INFO] 10.244.0.5:33884 - 43218 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000052808s
	[INFO] 10.244.0.5:33884 - 23248 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000051682s
	[INFO] 10.244.0.19:50450 - 45843 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000571174s
	[INFO] 10.244.0.19:40779 - 57556 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000093537s
	[INFO] 10.244.0.19:56740 - 6805 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000251632s
	[INFO] 10.244.0.19:57948 - 38055 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000081172s
	[INFO] 10.244.0.19:50704 - 62888 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000261897s
	[INFO] 10.244.0.19:36538 - 5102 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000114471s
	[INFO] 10.244.0.19:40663 - 45541 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 344 0.000997003s
	[INFO] 10.244.0.19:33556 - 51346 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002236036s
	[INFO] 10.244.0.22:55868 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000308695s
	[INFO] 10.244.0.22:51564 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000115143s
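The runs of NXDOMAIN answers in the CoreDNS log above are expected rather than a failure: with the default ndots:5 pod resolver settings, names such as registry.kube-system and storage.googleapis.com are first tried against each cluster search domain (kube-system.svc.cluster.local, svc.cluster.local, cluster.local) before the final lookup returns NOERROR. A quick, illustrative check of the search list that produces this pattern, assuming the nginx test pod from this run is still present:

  kubectl --context addons-962955 exec nginx -- cat /etc/resolv.conf
  (typically shows: search default.svc.cluster.local svc.cluster.local cluster.local, plus options ndots:5)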
	
	* 
	* ==> describe nodes <==
	* Name:               addons-962955
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-962955
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5
	                    minikube.k8s.io/name=addons-962955
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T18_44_19_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-962955
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 18:44:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-962955
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 18:48:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 18:48:55 +0000   Mon, 17 Jul 2023 18:44:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 18:48:55 +0000   Mon, 17 Jul 2023 18:44:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 18:48:55 +0000   Mon, 17 Jul 2023 18:44:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 18:48:55 +0000   Mon, 17 Jul 2023 18:44:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.215
	  Hostname:    addons-962955
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 aa438350fb8e403b81e2bf324d01ed37
	  System UUID:                aa438350-fb8e-403b-81e2-bf324d01ed37
	  Boot ID:                    9b44ebcf-fbd1-4a87-88b6-967946bcf03c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-65bdb79f98-skxnl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	  gcp-auth                    gcp-auth-58478865f7-hktrs                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  headlamp                    headlamp-66f6498c69-nld6z                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	  kube-system                 coredns-5d78c9869d-jjvff                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m25s
	  kube-system                 etcd-addons-962955                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m37s
	  kube-system                 kube-apiserver-addons-962955             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 kube-controller-manager-addons-962955    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 kube-proxy-f77hz                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-scheduler-addons-962955             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m2s   kube-proxy       
	  Normal  Starting                 4m38s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m38s  kubelet          Node addons-962955 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m38s  kubelet          Node addons-962955 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m38s  kubelet          Node addons-962955 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m38s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m37s  kubelet          Node addons-962955 status is now: NodeReady
	  Normal  RegisteredNode           4m26s  node-controller  Node addons-962955 event: Registered Node addons-962955 in Controller
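The node description above is the standard kubectl describe node view of the control-plane node; the allocated-resources figures are simply the summed requests of the 11 non-terminated pods measured against the 2-CPU, 3914496Ki node (750m CPU is about 37%, 170Mi memory about 4%). To regenerate it against this profile, something along these lines would be used:

  kubectl --context addons-962955 describe node addons-962955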
	
	* 
	* ==> dmesg <==
	* [  +0.099914] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.481674] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.703401] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.156097] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.087095] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.165167] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.123545] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.156024] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.106518] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.243921] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[Jul17 18:44] systemd-fstab-generator[907]: Ignoring "noauto" for root device
	[ +10.350764] systemd-fstab-generator[1246]: Ignoring "noauto" for root device
	[ +24.644054] kauditd_printk_skb: 48 callbacks suppressed
	[ +10.402126] kauditd_printk_skb: 10 callbacks suppressed
	[Jul17 18:45] kauditd_printk_skb: 14 callbacks suppressed
	[  +8.091932] kauditd_printk_skb: 16 callbacks suppressed
	[ +24.056615] kauditd_printk_skb: 2 callbacks suppressed
	[Jul17 18:46] kauditd_printk_skb: 5 callbacks suppressed
	[  +7.467203] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.482039] kauditd_printk_skb: 31 callbacks suppressed
	[  +6.439838] kauditd_printk_skb: 2 callbacks suppressed
	[Jul17 18:47] kauditd_printk_skb: 12 callbacks suppressed
	
	* 
	* ==> etcd [55cd1756ecfba2901a7efe8c295c4097a44e9c86fa2512d2b65d8f0e708444b7] <==
	* {"level":"info","ts":"2023-07-17T18:45:43.327Z","caller":"traceutil/trace.go:171","msg":"trace[1468252766] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:984; }","duration":"106.921483ms","start":"2023-07-17T18:45:43.221Z","end":"2023-07-17T18:45:43.327Z","steps":["trace[1468252766] 'agreement among raft nodes before linearized reading'  (duration: 106.673975ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T18:46:08.307Z","caller":"traceutil/trace.go:171","msg":"trace[173614892] linearizableReadLoop","detail":"{readStateIndex:1144; appliedIndex:1143; }","duration":"331.343687ms","start":"2023-07-17T18:46:07.975Z","end":"2023-07-17T18:46:08.307Z","steps":["trace[173614892] 'read index received'  (duration: 331.197154ms)","trace[173614892] 'applied index is now lower than readState.Index'  (duration: 146.149µs)"],"step_count":2}
	{"level":"info","ts":"2023-07-17T18:46:08.307Z","caller":"traceutil/trace.go:171","msg":"trace[1902718674] transaction","detail":"{read_only:false; response_revision:1104; number_of_response:1; }","duration":"437.945846ms","start":"2023-07-17T18:46:07.869Z","end":"2023-07-17T18:46:08.307Z","steps":["trace[1902718674] 'process raft request'  (duration: 437.406256ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T18:46:08.307Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T18:46:07.869Z","time spent":"438.094753ms","remote":"127.0.0.1:44068","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3246,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/default/cloud-spanner-emulator-88647b4cb-sv7gn\" mod_revision:1102 > success:<request_put:<key:\"/registry/pods/default/cloud-spanner-emulator-88647b4cb-sv7gn\" value_size:3177 >> failure:<request_range:<key:\"/registry/pods/default/cloud-spanner-emulator-88647b4cb-sv7gn\" > >"}
	{"level":"warn","ts":"2023-07-17T18:46:08.308Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"307.631967ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc\" ","response":"range_response_count:1 size:822"}
	{"level":"info","ts":"2023-07-17T18:46:08.310Z","caller":"traceutil/trace.go:171","msg":"trace[848631255] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc; range_end:; response_count:1; response_revision:1104; }","duration":"308.921704ms","start":"2023-07-17T18:46:08.001Z","end":"2023-07-17T18:46:08.310Z","steps":["trace[848631255] 'agreement among raft nodes before linearized reading'  (duration: 307.516051ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T18:46:08.310Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T18:46:08.001Z","time spent":"308.999179ms","remote":"127.0.0.1:44058","response type":"/etcdserverpb.KV/Range","request count":0,"request size":47,"response count":1,"response size":845,"request content":"key:\"/registry/persistentvolumeclaims/default/hpvc\" "}
	{"level":"warn","ts":"2023-07-17T18:46:08.309Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"333.783975ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" ","response":"range_response_count:1 size:339"}
	{"level":"info","ts":"2023-07-17T18:46:08.315Z","caller":"traceutil/trace.go:171","msg":"trace[1369436297] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:1; response_revision:1104; }","duration":"339.73047ms","start":"2023-07-17T18:46:07.975Z","end":"2023-07-17T18:46:08.315Z","steps":["trace[1369436297] 'agreement among raft nodes before linearized reading'  (duration: 333.736277ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T18:46:08.315Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T18:46:07.975Z","time spent":"339.822677ms","remote":"127.0.0.1:44062","response type":"/etcdserverpb.KV/Range","request count":0,"request size":30,"response count":1,"response size":362,"request content":"key:\"/registry/namespaces/default\" "}
	{"level":"warn","ts":"2023-07-17T18:46:08.309Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.869897ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:81343"}
	{"level":"info","ts":"2023-07-17T18:46:08.315Z","caller":"traceutil/trace.go:171","msg":"trace[625190988] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1104; }","duration":"120.924236ms","start":"2023-07-17T18:46:08.195Z","end":"2023-07-17T18:46:08.315Z","steps":["trace[625190988] 'agreement among raft nodes before linearized reading'  (duration: 114.692693ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T18:46:08.309Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"237.399188ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3753"}
	{"level":"info","ts":"2023-07-17T18:46:08.321Z","caller":"traceutil/trace.go:171","msg":"trace[35297495] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1104; }","duration":"248.487161ms","start":"2023-07-17T18:46:08.072Z","end":"2023-07-17T18:46:08.321Z","steps":["trace[35297495] 'agreement among raft nodes before linearized reading'  (duration: 237.379473ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T18:46:18.891Z","caller":"traceutil/trace.go:171","msg":"trace[1456019604] linearizableReadLoop","detail":"{readStateIndex:1270; appliedIndex:1269; }","duration":"384.261164ms","start":"2023-07-17T18:46:18.506Z","end":"2023-07-17T18:46:18.891Z","steps":["trace[1456019604] 'read index received'  (duration: 383.757865ms)","trace[1456019604] 'applied index is now lower than readState.Index'  (duration: 393.357µs)"],"step_count":2}
	{"level":"info","ts":"2023-07-17T18:46:18.892Z","caller":"traceutil/trace.go:171","msg":"trace[771578620] transaction","detail":"{read_only:false; response_revision:1223; number_of_response:1; }","duration":"406.774693ms","start":"2023-07-17T18:46:18.485Z","end":"2023-07-17T18:46:18.892Z","steps":["trace[771578620] 'process raft request'  (duration: 405.695516ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T18:46:18.892Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T18:46:18.485Z","time spent":"406.919147ms","remote":"127.0.0.1:44090","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":538,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1154 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:451 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"warn","ts":"2023-07-17T18:46:18.892Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"385.542975ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:5527"}
	{"level":"info","ts":"2023-07-17T18:46:18.892Z","caller":"traceutil/trace.go:171","msg":"trace[1622218686] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1223; }","duration":"385.594363ms","start":"2023-07-17T18:46:18.506Z","end":"2023-07-17T18:46:18.892Z","steps":["trace[1622218686] 'agreement among raft nodes before linearized reading'  (duration: 385.48835ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T18:46:18.892Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T18:46:18.506Z","time spent":"385.672239ms","remote":"127.0.0.1:44068","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":2,"response size":5550,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"warn","ts":"2023-07-17T18:46:18.892Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"362.326206ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-17T18:46:18.893Z","caller":"traceutil/trace.go:171","msg":"trace[277887194] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1223; }","duration":"362.41043ms","start":"2023-07-17T18:46:18.530Z","end":"2023-07-17T18:46:18.893Z","steps":["trace[277887194] 'agreement among raft nodes before linearized reading'  (duration: 362.215584ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T18:46:18.893Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T18:46:18.530Z","time spent":"362.519824ms","remote":"127.0.0.1:44036","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2023-07-17T18:46:28.181Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"203.473649ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" ","response":"range_response_count:1 size:339"}
	{"level":"info","ts":"2023-07-17T18:46:28.181Z","caller":"traceutil/trace.go:171","msg":"trace[457561345] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:1; response_revision:1299; }","duration":"203.845491ms","start":"2023-07-17T18:46:27.977Z","end":"2023-07-17T18:46:28.181Z","steps":["trace[457561345] 'range keys from in-memory index tree'  (duration: 203.190856ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [f28fea37ea3ff3a176237e04a113e4472f51e2ac954a170e4cb6fc0d9b46a478] <==
	* 2023/07/17 18:45:58 GCP Auth Webhook started!
	2023/07/17 18:46:01 Ready to marshal response ...
	2023/07/17 18:46:01 Ready to write response ...
	2023/07/17 18:46:02 Ready to marshal response ...
	2023/07/17 18:46:02 Ready to write response ...
	2023/07/17 18:46:02 Ready to marshal response ...
	2023/07/17 18:46:02 Ready to write response ...
	2023/07/17 18:46:05 Ready to marshal response ...
	2023/07/17 18:46:05 Ready to write response ...
	2023/07/17 18:46:10 Ready to marshal response ...
	2023/07/17 18:46:10 Ready to write response ...
	2023/07/17 18:46:14 Ready to marshal response ...
	2023/07/17 18:46:14 Ready to write response ...
	2023/07/17 18:46:17 Ready to marshal response ...
	2023/07/17 18:46:17 Ready to write response ...
	2023/07/17 18:46:41 Ready to marshal response ...
	2023/07/17 18:46:41 Ready to write response ...
	2023/07/17 18:48:45 Ready to marshal response ...
	2023/07/17 18:48:45 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  18:48:57 up 5 min,  0 users,  load average: 0.91, 1.72, 0.91
	Linux addons-962955 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [f82d8789057484631a5734bef91278461e451f2e9626d361b55474230203c236] <==
	* I0717 18:46:59.965180       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 18:46:59.965319       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 18:46:59.988351       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 18:46:59.988475       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 18:47:00.034209       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 18:47:00.034372       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 18:47:00.038758       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 18:47:00.039524       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 18:47:00.074785       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 18:47:00.074916       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 18:47:00.104184       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 18:47:00.104376       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 18:47:00.211156       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 18:47:00.211273       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 18:47:00.240700       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 18:47:00.240976       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0717 18:47:01.038426       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0717 18:47:01.241052       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0717 18:47:01.251341       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0717 18:47:22.484970       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0717 18:47:22.485026       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 18:47:22.485060       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 18:47:22.485068       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 18:48:45.628768       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs=map[IPv4:10.111.17.85]
	
	* 
	* ==> kube-controller-manager [3eed3bed9358a2c4d1c4345919c221238c399c1ba82c35757488923fa9df4279] <==
	* E0717 18:47:19.477621       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 18:47:33.009629       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 18:47:33.009764       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 18:47:34.068011       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 18:47:34.068109       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 18:47:40.535259       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 18:47:40.535334       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 18:47:40.664200       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 18:47:40.664297       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 18:48:06.998305       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 18:48:06.998642       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 18:48:07.550011       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 18:48:07.550139       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 18:48:21.735648       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 18:48:21.735787       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 18:48:38.139793       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 18:48:38.140026       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 18:48:44.798109       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 18:48:44.798206       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0717 18:48:45.371109       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-65bdb79f98 to 1"
	I0717 18:48:45.420045       1 event.go:307] "Event occurred" object="default/hello-world-app-65bdb79f98" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-65bdb79f98-skxnl"
	I0717 18:48:48.547322       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-create
	I0717 18:48:48.560293       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	W0717 18:48:53.291038       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 18:48:53.291105       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [58ebd34c56b382ce5821084b795b713d1b3e8ba6850973297fa14ad86287f6cb] <==
	* I0717 18:44:53.575803       1 node.go:141] Successfully retrieved node IP: 192.168.39.215
	I0717 18:44:53.576012       1 server_others.go:110] "Detected node IP" address="192.168.39.215"
	I0717 18:44:53.576037       1 server_others.go:554] "Using iptables proxy"
	I0717 18:44:54.135159       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0717 18:44:54.135268       1 server_others.go:192] "Using iptables Proxier"
	I0717 18:44:54.135378       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 18:44:54.136447       1 server.go:658] "Version info" version="v1.27.3"
	I0717 18:44:54.136545       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 18:44:54.144497       1 config.go:188] "Starting service config controller"
	I0717 18:44:54.144726       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 18:44:54.145041       1 config.go:97] "Starting endpoint slice config controller"
	I0717 18:44:54.145070       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 18:44:54.157934       1 config.go:315] "Starting node config controller"
	I0717 18:44:54.158253       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 18:44:54.245335       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0717 18:44:54.245611       1 shared_informer.go:318] Caches are synced for service config
	I0717 18:44:54.270797       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [780aa100b582282e4bc349f63bd05143d8aa40d1e82edb3c2f0fe232ff286825] <==
	* W0717 18:44:16.137695       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 18:44:16.137783       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 18:44:16.138190       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 18:44:16.138259       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 18:44:16.138570       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 18:44:16.138688       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 18:44:16.952076       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 18:44:16.952242       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 18:44:16.975083       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 18:44:16.975178       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 18:44:17.100427       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 18:44:17.100525       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 18:44:17.139158       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 18:44:17.139255       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 18:44:17.150033       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 18:44:17.150225       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 18:44:17.181087       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 18:44:17.181187       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 18:44:17.303566       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 18:44:17.303945       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 18:44:17.391117       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 18:44:17.391180       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 18:44:17.411683       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 18:44:17.411736       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0717 18:44:18.894046       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-07-17 18:43:45 UTC, ends at Mon 2023-07-17 18:48:57 UTC. --
	Jul 17 18:48:45 addons-962955 kubelet[1253]: I0717 18:48:45.448317    1253 memory_manager.go:346] "RemoveStaleState removing state" podUID="8d7aebdd-9397-4c1d-92b5-d58ba8f52206" containerName="csi-provisioner"
	Jul 17 18:48:45 addons-962955 kubelet[1253]: I0717 18:48:45.516130    1253 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vjxq\" (UniqueName: \"kubernetes.io/projected/96c6d749-3842-4f90-9830-9f8a72df7c80-kube-api-access-4vjxq\") pod \"hello-world-app-65bdb79f98-skxnl\" (UID: \"96c6d749-3842-4f90-9830-9f8a72df7c80\") " pod="default/hello-world-app-65bdb79f98-skxnl"
	Jul 17 18:48:45 addons-962955 kubelet[1253]: I0717 18:48:45.516236    1253 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/96c6d749-3842-4f90-9830-9f8a72df7c80-gcp-creds\") pod \"hello-world-app-65bdb79f98-skxnl\" (UID: \"96c6d749-3842-4f90-9830-9f8a72df7c80\") " pod="default/hello-world-app-65bdb79f98-skxnl"
	Jul 17 18:48:47 addons-962955 kubelet[1253]: I0717 18:48:47.028569    1253 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbfx\" (UniqueName: \"kubernetes.io/projected/8370a070-a195-4a63-8f95-3fa42711cefa-kube-api-access-jhbfx\") pod \"8370a070-a195-4a63-8f95-3fa42711cefa\" (UID: \"8370a070-a195-4a63-8f95-3fa42711cefa\") "
	Jul 17 18:48:47 addons-962955 kubelet[1253]: I0717 18:48:47.033267    1253 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8370a070-a195-4a63-8f95-3fa42711cefa-kube-api-access-jhbfx" (OuterVolumeSpecName: "kube-api-access-jhbfx") pod "8370a070-a195-4a63-8f95-3fa42711cefa" (UID: "8370a070-a195-4a63-8f95-3fa42711cefa"). InnerVolumeSpecName "kube-api-access-jhbfx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 18:48:47 addons-962955 kubelet[1253]: I0717 18:48:47.129151    1253 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-jhbfx\" (UniqueName: \"kubernetes.io/projected/8370a070-a195-4a63-8f95-3fa42711cefa-kube-api-access-jhbfx\") on node \"addons-962955\" DevicePath \"\""
	Jul 17 18:48:47 addons-962955 kubelet[1253]: I0717 18:48:47.619474    1253 scope.go:115] "RemoveContainer" containerID="a443642c3d7eeefd41d2dc833dd649a89a0b956a96971fc26c7bb8a5b8edc269"
	Jul 17 18:48:47 addons-962955 kubelet[1253]: I0717 18:48:47.812944    1253 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=8370a070-a195-4a63-8f95-3fa42711cefa path="/var/lib/kubelet/pods/8370a070-a195-4a63-8f95-3fa42711cefa/volumes"
	Jul 17 18:48:48 addons-962955 kubelet[1253]: I0717 18:48:48.016933    1253 scope.go:115] "RemoveContainer" containerID="a443642c3d7eeefd41d2dc833dd649a89a0b956a96971fc26c7bb8a5b8edc269"
	Jul 17 18:48:48 addons-962955 kubelet[1253]: E0717 18:48:48.017642    1253 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a443642c3d7eeefd41d2dc833dd649a89a0b956a96971fc26c7bb8a5b8edc269\": container with ID starting with a443642c3d7eeefd41d2dc833dd649a89a0b956a96971fc26c7bb8a5b8edc269 not found: ID does not exist" containerID="a443642c3d7eeefd41d2dc833dd649a89a0b956a96971fc26c7bb8a5b8edc269"
	Jul 17 18:48:48 addons-962955 kubelet[1253]: I0717 18:48:48.017687    1253 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:a443642c3d7eeefd41d2dc833dd649a89a0b956a96971fc26c7bb8a5b8edc269} err="failed to get container status \"a443642c3d7eeefd41d2dc833dd649a89a0b956a96971fc26c7bb8a5b8edc269\": rpc error: code = NotFound desc = could not find container \"a443642c3d7eeefd41d2dc833dd649a89a0b956a96971fc26c7bb8a5b8edc269\": container with ID starting with a443642c3d7eeefd41d2dc833dd649a89a0b956a96971fc26c7bb8a5b8edc269 not found: ID does not exist"
	Jul 17 18:48:48 addons-962955 kubelet[1253]: E0717 18:48:48.581078    1253 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7799c6795f-vmpkq.1772bc30ef4cd7ec", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7799c6795f-vmpkq", UID:"00a93ffe-c6e6-42bb-9bb1-4e5f6373a052", APIVersion:"v1", ResourceVersion:"643", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"addons-962955"}, FirstTimestamp:time.Date(2023, time.July, 17, 18, 48, 48, 576600044, time.Local), LastTimestamp:time.Date(2023, time.July, 17, 18, 48, 48, 576600044, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7799c6795f-vmpkq.1772bc30ef4cd7ec" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jul 17 18:48:49 addons-962955 kubelet[1253]: I0717 18:48:49.808947    1253 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=28640f5f-0678-4cc8-a907-c28e9be776ba path="/var/lib/kubelet/pods/28640f5f-0678-4cc8-a907-c28e9be776ba/volumes"
	Jul 17 18:48:49 addons-962955 kubelet[1253]: I0717 18:48:49.810545    1253 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=d047f76a-35cb-4426-8e6b-e7af3ce13f3e path="/var/lib/kubelet/pods/d047f76a-35cb-4426-8e6b-e7af3ce13f3e/volumes"
	Jul 17 18:48:49 addons-962955 kubelet[1253]: I0717 18:48:49.951177    1253 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/00a93ffe-c6e6-42bb-9bb1-4e5f6373a052-webhook-cert\") pod \"00a93ffe-c6e6-42bb-9bb1-4e5f6373a052\" (UID: \"00a93ffe-c6e6-42bb-9bb1-4e5f6373a052\") "
	Jul 17 18:48:49 addons-962955 kubelet[1253]: I0717 18:48:49.951236    1253 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqg67\" (UniqueName: \"kubernetes.io/projected/00a93ffe-c6e6-42bb-9bb1-4e5f6373a052-kube-api-access-zqg67\") pod \"00a93ffe-c6e6-42bb-9bb1-4e5f6373a052\" (UID: \"00a93ffe-c6e6-42bb-9bb1-4e5f6373a052\") "
	Jul 17 18:48:49 addons-962955 kubelet[1253]: I0717 18:48:49.956550    1253 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00a93ffe-c6e6-42bb-9bb1-4e5f6373a052-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "00a93ffe-c6e6-42bb-9bb1-4e5f6373a052" (UID: "00a93ffe-c6e6-42bb-9bb1-4e5f6373a052"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 18:48:49 addons-962955 kubelet[1253]: I0717 18:48:49.957443    1253 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00a93ffe-c6e6-42bb-9bb1-4e5f6373a052-kube-api-access-zqg67" (OuterVolumeSpecName: "kube-api-access-zqg67") pod "00a93ffe-c6e6-42bb-9bb1-4e5f6373a052" (UID: "00a93ffe-c6e6-42bb-9bb1-4e5f6373a052"). InnerVolumeSpecName "kube-api-access-zqg67". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 18:48:50 addons-962955 kubelet[1253]: I0717 18:48:50.052533    1253 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/00a93ffe-c6e6-42bb-9bb1-4e5f6373a052-webhook-cert\") on node \"addons-962955\" DevicePath \"\""
	Jul 17 18:48:50 addons-962955 kubelet[1253]: I0717 18:48:50.052582    1253 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-zqg67\" (UniqueName: \"kubernetes.io/projected/00a93ffe-c6e6-42bb-9bb1-4e5f6373a052-kube-api-access-zqg67\") on node \"addons-962955\" DevicePath \"\""
	Jul 17 18:48:50 addons-962955 kubelet[1253]: I0717 18:48:50.643379    1253 scope.go:115] "RemoveContainer" containerID="8c970140f8a96a19c5f9b84cc41fe4bbb8ee6e31842117a7e74a9c893a828ed4"
	Jul 17 18:48:50 addons-962955 kubelet[1253]: I0717 18:48:50.677284    1253 scope.go:115] "RemoveContainer" containerID="8c970140f8a96a19c5f9b84cc41fe4bbb8ee6e31842117a7e74a9c893a828ed4"
	Jul 17 18:48:50 addons-962955 kubelet[1253]: E0717 18:48:50.678029    1253 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c970140f8a96a19c5f9b84cc41fe4bbb8ee6e31842117a7e74a9c893a828ed4\": container with ID starting with 8c970140f8a96a19c5f9b84cc41fe4bbb8ee6e31842117a7e74a9c893a828ed4 not found: ID does not exist" containerID="8c970140f8a96a19c5f9b84cc41fe4bbb8ee6e31842117a7e74a9c893a828ed4"
	Jul 17 18:48:50 addons-962955 kubelet[1253]: I0717 18:48:50.678097    1253 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:8c970140f8a96a19c5f9b84cc41fe4bbb8ee6e31842117a7e74a9c893a828ed4} err="failed to get container status \"8c970140f8a96a19c5f9b84cc41fe4bbb8ee6e31842117a7e74a9c893a828ed4\": rpc error: code = NotFound desc = could not find container \"8c970140f8a96a19c5f9b84cc41fe4bbb8ee6e31842117a7e74a9c893a828ed4\": container with ID starting with 8c970140f8a96a19c5f9b84cc41fe4bbb8ee6e31842117a7e74a9c893a828ed4 not found: ID does not exist"
	Jul 17 18:48:51 addons-962955 kubelet[1253]: I0717 18:48:51.807362    1253 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=00a93ffe-c6e6-42bb-9bb1-4e5f6373a052 path="/var/lib/kubelet/pods/00a93ffe-c6e6-42bb-9bb1-4e5f6373a052/volumes"
	
	* 
	* ==> storage-provisioner [088f919f2ee5230339b8b8cec2b04226fc0c7fadbc1ca9b04f3592c4ac88460c] <==
	* I0717 18:45:22.499812       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 18:45:22.519765       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 18:45:22.521140       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 18:45:22.562170       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 18:45:22.562758       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-962955_93df5ab1-57a9-4ce2-82d1-a71a1abaf5cf!
	I0717 18:45:22.574504       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c8ce3801-71bc-4b32-9845-1ebcd711f027", APIVersion:"v1", ResourceVersion:"866", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-962955_93df5ab1-57a9-4ce2-82d1-a71a1abaf5cf became leader
	I0717 18:45:22.663415       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-962955_93df5ab1-57a9-4ce2-82d1-a71a1abaf5cf!
	
	* 
	* ==> storage-provisioner [72e8253e3d276bc05695ef7cd810dcb2eadf498290410cde63b20facd0321b77] <==
	* I0717 18:44:50.904518       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0717 18:45:20.918446       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-962955 -n addons-962955
helpers_test.go:261: (dbg) Run:  kubectl --context addons-962955 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (161.55s)

                                                
                                    
TestAddons/StoppedEnableDisable (154.67s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-962955
addons_test.go:148: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-962955: exit status 82 (2m0.848843766s)

                                                
                                                
-- stdout --
	* Stopping node "addons-962955"  ...
	* Stopping node "addons-962955"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:150: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-962955" : exit status 82
addons_test.go:152: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-962955
addons_test.go:152: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-962955: exit status 11 (21.532850154s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:154: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-962955" : exit status 11
addons_test.go:156: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-962955
addons_test.go:156: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-962955: exit status 11 (6.144223268s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:158: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-962955" : exit status 11
addons_test.go:161: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-962955
addons_test.go:161: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-962955: exit status 11 (6.142085819s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:163: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-962955" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.67s)

                                                
                                    
TestErrorSpam/setup (50.51s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-217504 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-217504 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-217504 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-217504 --driver=kvm2  --container-runtime=crio: (50.514249729s)
error_spam_test.go:96: unexpected stderr: "! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1"
error_spam_test.go:110: minikube stdout:
* [nospam-217504] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=16890
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/16890-1061725/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1061725/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the kvm2 driver based on user configuration
* Starting control plane node nospam-217504 in cluster nospam-217504
* Creating kvm2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Verifying Kubernetes components...
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-217504" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
--- FAIL: TestErrorSpam/setup (50.51s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (169.81s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-946642 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-946642 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (14.272508455s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-946642 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-946642 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c50750ac-5831-4d65-94e3-c90c66a1eaae] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c50750ac-5831-4d65-94e3-c90c66a1eaae] Running
E0717 19:01:27.821326 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt: no such file or directory
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.010602827s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-946642 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0717 19:03:01.330974 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/functional-685960/client.crt: no such file or directory
E0717 19:03:01.336340 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/functional-685960/client.crt: no such file or directory
E0717 19:03:01.346706 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/functional-685960/client.crt: no such file or directory
E0717 19:03:01.367108 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/functional-685960/client.crt: no such file or directory
E0717 19:03:01.407537 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/functional-685960/client.crt: no such file or directory
E0717 19:03:01.487987 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/functional-685960/client.crt: no such file or directory
E0717 19:03:01.648565 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/functional-685960/client.crt: no such file or directory
E0717 19:03:01.969230 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/functional-685960/client.crt: no such file or directory
E0717 19:03:02.610266 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/functional-685960/client.crt: no such file or directory
E0717 19:03:03.890883 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/functional-685960/client.crt: no such file or directory
E0717 19:03:06.451831 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/functional-685960/client.crt: no such file or directory
E0717 19:03:11.572293 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/functional-685960/client.crt: no such file or directory
E0717 19:03:21.813326 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/functional-685960/client.crt: no such file or directory
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-946642 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.617690644s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-946642 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-946642 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.39.20
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-946642 addons disable ingress-dns --alsologtostderr -v=1
E0717 19:03:42.294388 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/functional-685960/client.crt: no such file or directory
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-946642 addons disable ingress-dns --alsologtostderr -v=1: (3.92012044s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-946642 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-946642 addons disable ingress --alsologtostderr -v=1: (7.628610562s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-946642 -n ingress-addon-legacy-946642
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-946642 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-946642 logs -n 25: (1.346468141s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                   Args                                    |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image          | functional-685960 image load                                              | functional-685960           | jenkins | v1.30.1 | 17 Jul 23 18:58 UTC | 17 Jul 23 18:58 UTC |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| cp             | functional-685960 cp                                                      | functional-685960           | jenkins | v1.30.1 | 17 Jul 23 18:58 UTC | 17 Jul 23 18:58 UTC |
	|                | testdata/cp-test.txt                                                      |                             |         |         |                     |                     |
	|                | /home/docker/cp-test.txt                                                  |                             |         |         |                     |                     |
	| ssh            | functional-685960 ssh -n                                                  | functional-685960           | jenkins | v1.30.1 | 17 Jul 23 18:58 UTC | 17 Jul 23 18:58 UTC |
	|                | functional-685960 sudo cat                                                |                             |         |         |                     |                     |
	|                | /home/docker/cp-test.txt                                                  |                             |         |         |                     |                     |
	| cp             | functional-685960 cp                                                      | functional-685960           | jenkins | v1.30.1 | 17 Jul 23 18:58 UTC | 17 Jul 23 18:58 UTC |
	|                | functional-685960:/home/docker/cp-test.txt                                |                             |         |         |                     |                     |
	|                | /tmp/TestFunctionalparallelCpCmd1266771736/001/cp-test.txt                |                             |         |         |                     |                     |
	| ssh            | functional-685960 ssh -n                                                  | functional-685960           | jenkins | v1.30.1 | 17 Jul 23 18:58 UTC | 17 Jul 23 18:58 UTC |
	|                | functional-685960 sudo cat                                                |                             |         |         |                     |                     |
	|                | /home/docker/cp-test.txt                                                  |                             |         |         |                     |                     |
	| image          | functional-685960 image ls                                                | functional-685960           | jenkins | v1.30.1 | 17 Jul 23 18:58 UTC | 17 Jul 23 18:58 UTC |
	| image          | functional-685960 image save --daemon                                     | functional-685960           | jenkins | v1.30.1 | 17 Jul 23 18:58 UTC | 17 Jul 23 18:58 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-685960                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| update-context | functional-685960                                                         | functional-685960           | jenkins | v1.30.1 | 17 Jul 23 18:58 UTC | 17 Jul 23 18:58 UTC |
	|                | update-context                                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                             |         |         |                     |                     |
	| update-context | functional-685960                                                         | functional-685960           | jenkins | v1.30.1 | 17 Jul 23 18:58 UTC | 17 Jul 23 18:58 UTC |
	|                | update-context                                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                             |         |         |                     |                     |
	| update-context | functional-685960                                                         | functional-685960           | jenkins | v1.30.1 | 17 Jul 23 18:58 UTC | 17 Jul 23 18:58 UTC |
	|                | update-context                                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                             |         |         |                     |                     |
	| image          | functional-685960                                                         | functional-685960           | jenkins | v1.30.1 | 17 Jul 23 18:58 UTC | 17 Jul 23 18:58 UTC |
	|                | image ls --format short                                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-685960                                                         | functional-685960           | jenkins | v1.30.1 | 17 Jul 23 18:58 UTC | 17 Jul 23 18:58 UTC |
	|                | image ls --format yaml                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-685960                                                         | functional-685960           | jenkins | v1.30.1 | 17 Jul 23 18:58 UTC | 17 Jul 23 18:58 UTC |
	|                | image ls --format json                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| ssh            | functional-685960 ssh pgrep                                               | functional-685960           | jenkins | v1.30.1 | 17 Jul 23 18:58 UTC |                     |
	|                | buildkitd                                                                 |                             |         |         |                     |                     |
	| image          | functional-685960                                                         | functional-685960           | jenkins | v1.30.1 | 17 Jul 23 18:58 UTC | 17 Jul 23 18:58 UTC |
	|                | image ls --format table                                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-685960 image build -t                                          | functional-685960           | jenkins | v1.30.1 | 17 Jul 23 18:58 UTC | 17 Jul 23 18:58 UTC |
	|                | localhost/my-image:functional-685960                                      |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                          |                             |         |         |                     |                     |
	| image          | functional-685960 image ls                                                | functional-685960           | jenkins | v1.30.1 | 17 Jul 23 18:58 UTC | 17 Jul 23 18:58 UTC |
	| delete         | -p functional-685960                                                      | functional-685960           | jenkins | v1.30.1 | 17 Jul 23 18:58 UTC | 17 Jul 23 18:58 UTC |
	| start          | -p ingress-addon-legacy-946642                                            | ingress-addon-legacy-946642 | jenkins | v1.30.1 | 17 Jul 23 18:58 UTC | 17 Jul 23 19:00 UTC |
	|                | --kubernetes-version=v1.18.20                                             |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	|                | -v=5 --driver=kvm2                                                        |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                                  |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-946642                                               | ingress-addon-legacy-946642 | jenkins | v1.30.1 | 17 Jul 23 19:00 UTC | 17 Jul 23 19:01 UTC |
	|                | addons enable ingress                                                     |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                    |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-946642                                               | ingress-addon-legacy-946642 | jenkins | v1.30.1 | 17 Jul 23 19:01 UTC | 17 Jul 23 19:01 UTC |
	|                | addons enable ingress-dns                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                    |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-946642                                               | ingress-addon-legacy-946642 | jenkins | v1.30.1 | 17 Jul 23 19:01 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                             |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                              |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-946642 ip                                            | ingress-addon-legacy-946642 | jenkins | v1.30.1 | 17 Jul 23 19:03 UTC | 17 Jul 23 19:03 UTC |
	| addons         | ingress-addon-legacy-946642                                               | ingress-addon-legacy-946642 | jenkins | v1.30.1 | 17 Jul 23 19:03 UTC | 17 Jul 23 19:03 UTC |
	|                | addons disable ingress-dns                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                    |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-946642                                               | ingress-addon-legacy-946642 | jenkins | v1.30.1 | 17 Jul 23 19:03 UTC | 17 Jul 23 19:03 UTC |
	|                | addons disable ingress                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                    |                             |         |         |                     |                     |
	|----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 18:58:58
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 18:58:58.109343 1077216 out.go:296] Setting OutFile to fd 1 ...
	I0717 18:58:58.109524 1077216 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 18:58:58.109536 1077216 out.go:309] Setting ErrFile to fd 2...
	I0717 18:58:58.109543 1077216 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 18:58:58.109787 1077216 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1061725/.minikube/bin
	I0717 18:58:58.110469 1077216 out.go:303] Setting JSON to false
	I0717 18:58:58.111588 1077216 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":13289,"bootTime":1689607049,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:58:58.111664 1077216 start.go:138] virtualization: kvm guest
	I0717 18:58:58.114932 1077216 out.go:177] * [ingress-addon-legacy-946642] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 18:58:58.117404 1077216 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 18:58:58.117400 1077216 notify.go:220] Checking for updates...
	I0717 18:58:58.119796 1077216 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:58:58.122154 1077216 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 18:58:58.124872 1077216 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1061725/.minikube
	I0717 18:58:58.127041 1077216 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 18:58:58.129027 1077216 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 18:58:58.131314 1077216 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 18:58:58.171067 1077216 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 18:58:58.173211 1077216 start.go:298] selected driver: kvm2
	I0717 18:58:58.173235 1077216 start.go:880] validating driver "kvm2" against <nil>
	I0717 18:58:58.173246 1077216 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 18:58:58.173987 1077216 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:58:58.174074 1077216 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16890-1061725/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 18:58:58.189981 1077216 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.30.1
	I0717 18:58:58.190047 1077216 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 18:58:58.190257 1077216 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:58:58.190285 1077216 cni.go:84] Creating CNI manager for ""
	I0717 18:58:58.190299 1077216 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:58:58.190316 1077216 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 18:58:58.190324 1077216 start_flags.go:319] config:
	{Name:ingress-addon-legacy-946642 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-946642 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 18:58:58.190463 1077216 iso.go:125] acquiring lock: {Name:mk4c08fdde891ec9d41b144ca36ec57b2da32175 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:58:58.193012 1077216 out.go:177] * Starting control plane node ingress-addon-legacy-946642 in cluster ingress-addon-legacy-946642
	I0717 18:58:58.195077 1077216 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0717 18:58:58.219437 1077216 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0717 18:58:58.219489 1077216 cache.go:57] Caching tarball of preloaded images
	I0717 18:58:58.219671 1077216 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0717 18:58:58.222410 1077216 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0717 18:58:58.224295 1077216 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0717 18:58:58.252267 1077216 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0717 18:59:02.567262 1077216 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0717 18:59:02.567365 1077216 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0717 18:59:03.534523 1077216 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0717 18:59:03.534937 1077216 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/config.json ...
	I0717 18:59:03.534975 1077216 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/config.json: {Name:mk5a82c0b3fc1ebccb8b9ee5c2d65c473dee3a5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:59:03.535161 1077216 start.go:365] acquiring machines lock for ingress-addon-legacy-946642: {Name:mk1fbc5d19c9a63796b02883911e39f1c406ff89 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 18:59:03.535198 1077216 start.go:369] acquired machines lock for "ingress-addon-legacy-946642" in 18.02µs
	I0717 18:59:03.535217 1077216 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-946642 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-946642 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:59:03.535292 1077216 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 18:59:03.537923 1077216 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0717 18:59:03.538151 1077216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:59:03.538209 1077216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:59:03.553471 1077216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32877
	I0717 18:59:03.554077 1077216 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:59:03.554769 1077216 main.go:141] libmachine: Using API Version  1
	I0717 18:59:03.554798 1077216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:59:03.555220 1077216 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:59:03.555439 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetMachineName
	I0717 18:59:03.555611 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .DriverName
	I0717 18:59:03.555786 1077216 start.go:159] libmachine.API.Create for "ingress-addon-legacy-946642" (driver="kvm2")
	I0717 18:59:03.555823 1077216 client.go:168] LocalClient.Create starting
	I0717 18:59:03.555867 1077216 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem
	I0717 18:59:03.555909 1077216 main.go:141] libmachine: Decoding PEM data...
	I0717 18:59:03.555926 1077216 main.go:141] libmachine: Parsing certificate...
	I0717 18:59:03.555985 1077216 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem
	I0717 18:59:03.556005 1077216 main.go:141] libmachine: Decoding PEM data...
	I0717 18:59:03.556017 1077216 main.go:141] libmachine: Parsing certificate...
	I0717 18:59:03.556043 1077216 main.go:141] libmachine: Running pre-create checks...
	I0717 18:59:03.556053 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .PreCreateCheck
	I0717 18:59:03.556440 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetConfigRaw
	I0717 18:59:03.556898 1077216 main.go:141] libmachine: Creating machine...
	I0717 18:59:03.556915 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .Create
	I0717 18:59:03.557057 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Creating KVM machine...
	I0717 18:59:03.558773 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | found existing default KVM network
	I0717 18:59:03.559698 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | I0717 18:59:03.559539 1077249 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d770}
	I0717 18:59:03.565986 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | trying to create private KVM network mk-ingress-addon-legacy-946642 192.168.39.0/24...
	I0717 18:59:03.643540 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Setting up store path in /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/ingress-addon-legacy-946642 ...
	I0717 18:59:03.643584 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | private KVM network mk-ingress-addon-legacy-946642 192.168.39.0/24 created
	I0717 18:59:03.643598 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Building disk image from file:///home/jenkins/minikube-integration/16890-1061725/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso
	I0717 18:59:03.643619 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | I0717 18:59:03.643452 1077249 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/16890-1061725/.minikube
	I0717 18:59:03.643691 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Downloading /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/16890-1061725/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso...
	I0717 18:59:03.894728 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | I0717 18:59:03.894574 1077249 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/ingress-addon-legacy-946642/id_rsa...
	I0717 18:59:04.053181 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | I0717 18:59:04.053032 1077249 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/ingress-addon-legacy-946642/ingress-addon-legacy-946642.rawdisk...
	I0717 18:59:04.053215 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | Writing magic tar header
	I0717 18:59:04.053241 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | Writing SSH key tar header
	I0717 18:59:04.053355 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | I0717 18:59:04.053280 1077249 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/ingress-addon-legacy-946642 ...
	I0717 18:59:04.054051 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/ingress-addon-legacy-946642
	I0717 18:59:04.054118 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Setting executable bit set on /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/ingress-addon-legacy-946642 (perms=drwx------)
	I0717 18:59:04.054133 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines
	I0717 18:59:04.054150 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16890-1061725/.minikube
	I0717 18:59:04.054161 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16890-1061725
	I0717 18:59:04.054179 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 18:59:04.054194 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | Checking permissions on dir: /home/jenkins
	I0717 18:59:04.054205 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Setting executable bit set on /home/jenkins/minikube-integration/16890-1061725/.minikube/machines (perms=drwxr-xr-x)
	I0717 18:59:04.054220 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Setting executable bit set on /home/jenkins/minikube-integration/16890-1061725/.minikube (perms=drwxr-xr-x)
	I0717 18:59:04.054235 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Setting executable bit set on /home/jenkins/minikube-integration/16890-1061725 (perms=drwxrwxr-x)
	I0717 18:59:04.054253 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 18:59:04.054266 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 18:59:04.054279 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | Checking permissions on dir: /home
	I0717 18:59:04.054292 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Creating domain...
	I0717 18:59:04.054305 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | Skipping /home - not owner
	I0717 18:59:04.055384 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) define libvirt domain using xml: 
	I0717 18:59:04.055412 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) <domain type='kvm'>
	I0717 18:59:04.055422 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)   <name>ingress-addon-legacy-946642</name>
	I0717 18:59:04.055428 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)   <memory unit='MiB'>4096</memory>
	I0717 18:59:04.055444 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)   <vcpu>2</vcpu>
	I0717 18:59:04.055454 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)   <features>
	I0717 18:59:04.055461 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)     <acpi/>
	I0717 18:59:04.055469 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)     <apic/>
	I0717 18:59:04.055479 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)     <pae/>
	I0717 18:59:04.055488 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)     
	I0717 18:59:04.055495 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)   </features>
	I0717 18:59:04.055501 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)   <cpu mode='host-passthrough'>
	I0717 18:59:04.055538 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)   
	I0717 18:59:04.055570 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)   </cpu>
	I0717 18:59:04.055587 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)   <os>
	I0717 18:59:04.055607 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)     <type>hvm</type>
	I0717 18:59:04.055622 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)     <boot dev='cdrom'/>
	I0717 18:59:04.055636 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)     <boot dev='hd'/>
	I0717 18:59:04.055651 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)     <bootmenu enable='no'/>
	I0717 18:59:04.055669 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)   </os>
	I0717 18:59:04.055683 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)   <devices>
	I0717 18:59:04.055699 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)     <disk type='file' device='cdrom'>
	I0717 18:59:04.055720 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)       <source file='/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/ingress-addon-legacy-946642/boot2docker.iso'/>
	I0717 18:59:04.055735 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)       <target dev='hdc' bus='scsi'/>
	I0717 18:59:04.055809 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)       <readonly/>
	I0717 18:59:04.055847 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)     </disk>
	I0717 18:59:04.055864 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)     <disk type='file' device='disk'>
	I0717 18:59:04.055879 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 18:59:04.055894 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)       <source file='/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/ingress-addon-legacy-946642/ingress-addon-legacy-946642.rawdisk'/>
	I0717 18:59:04.055903 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)       <target dev='hda' bus='virtio'/>
	I0717 18:59:04.055910 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)     </disk>
	I0717 18:59:04.055920 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)     <interface type='network'>
	I0717 18:59:04.055928 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)       <source network='mk-ingress-addon-legacy-946642'/>
	I0717 18:59:04.055937 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)       <model type='virtio'/>
	I0717 18:59:04.055955 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)     </interface>
	I0717 18:59:04.055975 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)     <interface type='network'>
	I0717 18:59:04.055991 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)       <source network='default'/>
	I0717 18:59:04.056008 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)       <model type='virtio'/>
	I0717 18:59:04.056023 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)     </interface>
	I0717 18:59:04.056043 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)     <serial type='pty'>
	I0717 18:59:04.056058 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)       <target port='0'/>
	I0717 18:59:04.056080 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)     </serial>
	I0717 18:59:04.056098 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)     <console type='pty'>
	I0717 18:59:04.056114 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)       <target type='serial' port='0'/>
	I0717 18:59:04.056127 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)     </console>
	I0717 18:59:04.056142 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)     <rng model='virtio'>
	I0717 18:59:04.056157 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)       <backend model='random'>/dev/random</backend>
	I0717 18:59:04.056193 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)     </rng>
	I0717 18:59:04.056213 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)     
	I0717 18:59:04.056227 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)     
	I0717 18:59:04.056235 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642)   </devices>
	I0717 18:59:04.056241 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) </domain>
	I0717 18:59:04.056249 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) 
	I0717 18:59:04.061554 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined MAC address 52:54:00:32:83:09 in network default
	I0717 18:59:04.062230 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:04.062249 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Ensuring networks are active...
	I0717 18:59:04.062993 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Ensuring network default is active
	I0717 18:59:04.063368 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Ensuring network mk-ingress-addon-legacy-946642 is active
	I0717 18:59:04.064007 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Getting domain xml...
	I0717 18:59:04.064984 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Creating domain...
	I0717 18:59:05.356136 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Waiting to get IP...
	I0717 18:59:05.357116 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:05.357727 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | unable to find current IP address of domain ingress-addon-legacy-946642 in network mk-ingress-addon-legacy-946642
	I0717 18:59:05.357763 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | I0717 18:59:05.357701 1077249 retry.go:31] will retry after 258.211974ms: waiting for machine to come up
	I0717 18:59:05.617281 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:05.617994 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | unable to find current IP address of domain ingress-addon-legacy-946642 in network mk-ingress-addon-legacy-946642
	I0717 18:59:05.618036 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | I0717 18:59:05.617918 1077249 retry.go:31] will retry after 344.665285ms: waiting for machine to come up
	I0717 18:59:05.964634 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:05.965030 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | unable to find current IP address of domain ingress-addon-legacy-946642 in network mk-ingress-addon-legacy-946642
	I0717 18:59:05.965079 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | I0717 18:59:05.964965 1077249 retry.go:31] will retry after 396.660955ms: waiting for machine to come up
	I0717 18:59:06.363877 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:06.364459 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | unable to find current IP address of domain ingress-addon-legacy-946642 in network mk-ingress-addon-legacy-946642
	I0717 18:59:06.364532 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | I0717 18:59:06.364421 1077249 retry.go:31] will retry after 416.37505ms: waiting for machine to come up
	I0717 18:59:06.782268 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:06.782803 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | unable to find current IP address of domain ingress-addon-legacy-946642 in network mk-ingress-addon-legacy-946642
	I0717 18:59:06.782833 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | I0717 18:59:06.782736 1077249 retry.go:31] will retry after 702.033052ms: waiting for machine to come up
	I0717 18:59:07.486872 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:07.487412 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | unable to find current IP address of domain ingress-addon-legacy-946642 in network mk-ingress-addon-legacy-946642
	I0717 18:59:07.487447 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | I0717 18:59:07.487352 1077249 retry.go:31] will retry after 731.697927ms: waiting for machine to come up
	I0717 18:59:08.220463 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:08.221019 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | unable to find current IP address of domain ingress-addon-legacy-946642 in network mk-ingress-addon-legacy-946642
	I0717 18:59:08.221072 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | I0717 18:59:08.220943 1077249 retry.go:31] will retry after 1.017339318s: waiting for machine to come up
	I0717 18:59:09.240124 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:09.240596 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | unable to find current IP address of domain ingress-addon-legacy-946642 in network mk-ingress-addon-legacy-946642
	I0717 18:59:09.240625 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | I0717 18:59:09.240554 1077249 retry.go:31] will retry after 1.003382691s: waiting for machine to come up
	I0717 18:59:10.245365 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:10.245893 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | unable to find current IP address of domain ingress-addon-legacy-946642 in network mk-ingress-addon-legacy-946642
	I0717 18:59:10.245924 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | I0717 18:59:10.245807 1077249 retry.go:31] will retry after 1.861691006s: waiting for machine to come up
	I0717 18:59:12.109943 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:12.110553 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | unable to find current IP address of domain ingress-addon-legacy-946642 in network mk-ingress-addon-legacy-946642
	I0717 18:59:12.110578 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | I0717 18:59:12.110487 1077249 retry.go:31] will retry after 2.263602998s: waiting for machine to come up
	I0717 18:59:14.376138 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:14.376683 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | unable to find current IP address of domain ingress-addon-legacy-946642 in network mk-ingress-addon-legacy-946642
	I0717 18:59:14.376717 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | I0717 18:59:14.376632 1077249 retry.go:31] will retry after 2.167299599s: waiting for machine to come up
	I0717 18:59:16.547222 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:16.547822 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | unable to find current IP address of domain ingress-addon-legacy-946642 in network mk-ingress-addon-legacy-946642
	I0717 18:59:16.547849 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | I0717 18:59:16.547774 1077249 retry.go:31] will retry after 3.384737439s: waiting for machine to come up
	I0717 18:59:19.935296 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:19.935940 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | unable to find current IP address of domain ingress-addon-legacy-946642 in network mk-ingress-addon-legacy-946642
	I0717 18:59:19.935970 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | I0717 18:59:19.935842 1077249 retry.go:31] will retry after 3.120091702s: waiting for machine to come up
	I0717 18:59:23.060494 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:23.060997 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | unable to find current IP address of domain ingress-addon-legacy-946642 in network mk-ingress-addon-legacy-946642
	I0717 18:59:23.061026 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | I0717 18:59:23.060944 1077249 retry.go:31] will retry after 3.956272287s: waiting for machine to come up
	I0717 18:59:27.021855 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:27.022423 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Found IP for machine: 192.168.39.20
	I0717 18:59:27.022452 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Reserving static IP address...
	I0717 18:59:27.022478 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has current primary IP address 192.168.39.20 and MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:27.022871 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-946642", mac: "52:54:00:63:dd:41", ip: "192.168.39.20"} in network mk-ingress-addon-legacy-946642
	I0717 18:59:27.112183 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | Getting to WaitForSSH function...
	I0717 18:59:27.112229 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Reserved static IP address: 192.168.39.20
	I0717 18:59:27.112285 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Waiting for SSH to be available...
	I0717 18:59:27.114953 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:27.115346 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:dd:41", ip: ""} in network mk-ingress-addon-legacy-946642: {Iface:virbr1 ExpiryTime:2023-07-17 19:59:20 +0000 UTC Type:0 Mac:52:54:00:63:dd:41 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:minikube Clientid:01:52:54:00:63:dd:41}
	I0717 18:59:27.115383 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined IP address 192.168.39.20 and MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:27.115514 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | Using SSH client type: external
	I0717 18:59:27.115560 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | Using SSH private key: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/ingress-addon-legacy-946642/id_rsa (-rw-------)
	I0717 18:59:27.115596 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.20 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/ingress-addon-legacy-946642/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:59:27.115618 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | About to run SSH command:
	I0717 18:59:27.115636 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | exit 0
	I0717 18:59:27.213842 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | SSH cmd err, output: <nil>: 
	I0717 18:59:27.214122 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) KVM machine creation complete!
	I0717 18:59:27.214486 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetConfigRaw
	I0717 18:59:27.215129 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .DriverName
	I0717 18:59:27.215349 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .DriverName
	I0717 18:59:27.215522 1077216 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 18:59:27.215538 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetState
	I0717 18:59:27.217044 1077216 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 18:59:27.217059 1077216 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 18:59:27.217088 1077216 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 18:59:27.217098 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHHostname
	I0717 18:59:27.219818 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:27.220219 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:dd:41", ip: ""} in network mk-ingress-addon-legacy-946642: {Iface:virbr1 ExpiryTime:2023-07-17 19:59:20 +0000 UTC Type:0 Mac:52:54:00:63:dd:41 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ingress-addon-legacy-946642 Clientid:01:52:54:00:63:dd:41}
	I0717 18:59:27.220249 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined IP address 192.168.39.20 and MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:27.220441 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHPort
	I0717 18:59:27.220686 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHKeyPath
	I0717 18:59:27.220855 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHKeyPath
	I0717 18:59:27.221017 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHUsername
	I0717 18:59:27.221194 1077216 main.go:141] libmachine: Using SSH client type: native
	I0717 18:59:27.221638 1077216 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.20 22 <nil> <nil>}
	I0717 18:59:27.221652 1077216 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 18:59:27.353226 1077216 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:59:27.353250 1077216 main.go:141] libmachine: Detecting the provisioner...
	I0717 18:59:27.353260 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHHostname
	I0717 18:59:27.356786 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:27.357301 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:dd:41", ip: ""} in network mk-ingress-addon-legacy-946642: {Iface:virbr1 ExpiryTime:2023-07-17 19:59:20 +0000 UTC Type:0 Mac:52:54:00:63:dd:41 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ingress-addon-legacy-946642 Clientid:01:52:54:00:63:dd:41}
	I0717 18:59:27.357354 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined IP address 192.168.39.20 and MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:27.357609 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHPort
	I0717 18:59:27.357915 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHKeyPath
	I0717 18:59:27.358180 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHKeyPath
	I0717 18:59:27.358347 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHUsername
	I0717 18:59:27.358554 1077216 main.go:141] libmachine: Using SSH client type: native
	I0717 18:59:27.359118 1077216 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.20 22 <nil> <nil>}
	I0717 18:59:27.359139 1077216 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 18:59:27.495035 1077216 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gf5d52c7-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0717 18:59:27.495118 1077216 main.go:141] libmachine: found compatible host: buildroot
	I0717 18:59:27.495134 1077216 main.go:141] libmachine: Provisioning with buildroot...
	I0717 18:59:27.495166 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetMachineName
	I0717 18:59:27.495556 1077216 buildroot.go:166] provisioning hostname "ingress-addon-legacy-946642"
	I0717 18:59:27.495585 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetMachineName
	I0717 18:59:27.495843 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHHostname
	I0717 18:59:27.498661 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:27.499028 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:dd:41", ip: ""} in network mk-ingress-addon-legacy-946642: {Iface:virbr1 ExpiryTime:2023-07-17 19:59:20 +0000 UTC Type:0 Mac:52:54:00:63:dd:41 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ingress-addon-legacy-946642 Clientid:01:52:54:00:63:dd:41}
	I0717 18:59:27.499071 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined IP address 192.168.39.20 and MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:27.499233 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHPort
	I0717 18:59:27.499455 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHKeyPath
	I0717 18:59:27.499652 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHKeyPath
	I0717 18:59:27.499810 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHUsername
	I0717 18:59:27.499985 1077216 main.go:141] libmachine: Using SSH client type: native
	I0717 18:59:27.500590 1077216 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.20 22 <nil> <nil>}
	I0717 18:59:27.500614 1077216 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-946642 && echo "ingress-addon-legacy-946642" | sudo tee /etc/hostname
	I0717 18:59:27.642348 1077216 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-946642
	
	I0717 18:59:27.642388 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHHostname
	I0717 18:59:27.645681 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:27.646054 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:dd:41", ip: ""} in network mk-ingress-addon-legacy-946642: {Iface:virbr1 ExpiryTime:2023-07-17 19:59:20 +0000 UTC Type:0 Mac:52:54:00:63:dd:41 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ingress-addon-legacy-946642 Clientid:01:52:54:00:63:dd:41}
	I0717 18:59:27.646092 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined IP address 192.168.39.20 and MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:27.646318 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHPort
	I0717 18:59:27.646570 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHKeyPath
	I0717 18:59:27.646732 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHKeyPath
	I0717 18:59:27.646889 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHUsername
	I0717 18:59:27.647049 1077216 main.go:141] libmachine: Using SSH client type: native
	I0717 18:59:27.647597 1077216 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.20 22 <nil> <nil>}
	I0717 18:59:27.647619 1077216 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-946642' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-946642/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-946642' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:59:27.786226 1077216 main.go:141] libmachine: SSH cmd err, output: <nil>: 
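	Note: the SSH command above is intentionally idempotent. It only touches /etc/hosts when no line already ends in the new hostname, rewriting an existing 127.0.1.1 entry if one is present and appending a new one otherwise. A minimal sketch of how such a command string can be assembled; the helper name and use of fmt.Sprintf are illustrative, not minikube's actual provisioning code:

	package main

	import "fmt"

	// hostsUpdateCmd returns a shell snippet that maps 127.0.1.1 to the given
	// hostname exactly once: it rewrites an existing 127.0.1.1 entry when one
	// is present and appends a new line otherwise.
	func hostsUpdateCmd(hostname string) string {
		return fmt.Sprintf(`
	if ! grep -xq '.*\s%[1]s' /etc/hosts; then
	  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
	  else
	    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
	  fi
	fi`, hostname)
	}

	func main() {
		fmt.Println(hostsUpdateCmd("ingress-addon-legacy-946642"))
	}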
	I0717 18:59:27.786270 1077216 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16890-1061725/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-1061725/.minikube}
	I0717 18:59:27.786304 1077216 buildroot.go:174] setting up certificates
	I0717 18:59:27.786319 1077216 provision.go:83] configureAuth start
	I0717 18:59:27.786338 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetMachineName
	I0717 18:59:27.786706 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetIP
	I0717 18:59:27.789618 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:27.789929 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:dd:41", ip: ""} in network mk-ingress-addon-legacy-946642: {Iface:virbr1 ExpiryTime:2023-07-17 19:59:20 +0000 UTC Type:0 Mac:52:54:00:63:dd:41 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ingress-addon-legacy-946642 Clientid:01:52:54:00:63:dd:41}
	I0717 18:59:27.789974 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined IP address 192.168.39.20 and MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:27.790172 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHHostname
	I0717 18:59:27.792825 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:27.793184 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:dd:41", ip: ""} in network mk-ingress-addon-legacy-946642: {Iface:virbr1 ExpiryTime:2023-07-17 19:59:20 +0000 UTC Type:0 Mac:52:54:00:63:dd:41 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ingress-addon-legacy-946642 Clientid:01:52:54:00:63:dd:41}
	I0717 18:59:27.793219 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined IP address 192.168.39.20 and MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:27.793369 1077216 provision.go:138] copyHostCerts
	I0717 18:59:27.793408 1077216 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem
	I0717 18:59:27.793446 1077216 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem, removing ...
	I0717 18:59:27.793455 1077216 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem
	I0717 18:59:27.793537 1077216 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem (1082 bytes)
	I0717 18:59:27.793643 1077216 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem
	I0717 18:59:27.793662 1077216 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem, removing ...
	I0717 18:59:27.793669 1077216 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem
	I0717 18:59:27.793695 1077216 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem (1123 bytes)
	I0717 18:59:27.793752 1077216 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem
	I0717 18:59:27.793768 1077216 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem, removing ...
	I0717 18:59:27.793774 1077216 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem
	I0717 18:59:27.793796 1077216 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem (1675 bytes)
	I0717 18:59:27.793842 1077216 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-946642 san=[192.168.39.20 192.168.39.20 localhost 127.0.0.1 minikube ingress-addon-legacy-946642]
	I0717 18:59:27.882858 1077216 provision.go:172] copyRemoteCerts
	I0717 18:59:27.882940 1077216 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:59:27.882968 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHHostname
	I0717 18:59:27.886466 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:27.886910 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:dd:41", ip: ""} in network mk-ingress-addon-legacy-946642: {Iface:virbr1 ExpiryTime:2023-07-17 19:59:20 +0000 UTC Type:0 Mac:52:54:00:63:dd:41 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ingress-addon-legacy-946642 Clientid:01:52:54:00:63:dd:41}
	I0717 18:59:27.886945 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined IP address 192.168.39.20 and MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:27.887233 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHPort
	I0717 18:59:27.887506 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHKeyPath
	I0717 18:59:27.887731 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHUsername
	I0717 18:59:27.887878 1077216 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/ingress-addon-legacy-946642/id_rsa Username:docker}
	I0717 18:59:27.983804 1077216 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 18:59:27.983910 1077216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 18:59:28.010128 1077216 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 18:59:28.010205 1077216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0717 18:59:28.034965 1077216 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 18:59:28.035055 1077216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 18:59:28.061336 1077216 provision.go:86] duration metric: configureAuth took 274.997389ms
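	configureAuth above generated a server certificate whose SANs cover the VM IP, localhost, 127.0.0.1, minikube, and the machine name, then copied it to /etc/docker/server.pem. A quick, standard-library-only way to confirm the SANs on the copied file; the path and output format here are assumptions for illustration:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	func main() {
		// Read the server certificate that was scp'd to /etc/docker/server.pem.
		data, err := os.ReadFile("/etc/docker/server.pem")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// Print the subject alternative names the provisioner requested.
		fmt.Println("DNS SANs:", cert.DNSNames)
		fmt.Println("IP SANs: ", cert.IPAddresses)
		fmt.Println("Org:     ", cert.Subject.Organization)
	}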
	I0717 18:59:28.061369 1077216 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:59:28.061581 1077216 config.go:182] Loaded profile config "ingress-addon-legacy-946642": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0717 18:59:28.061661 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHHostname
	I0717 18:59:28.064954 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:28.065363 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:dd:41", ip: ""} in network mk-ingress-addon-legacy-946642: {Iface:virbr1 ExpiryTime:2023-07-17 19:59:20 +0000 UTC Type:0 Mac:52:54:00:63:dd:41 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ingress-addon-legacy-946642 Clientid:01:52:54:00:63:dd:41}
	I0717 18:59:28.065398 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined IP address 192.168.39.20 and MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:28.065688 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHPort
	I0717 18:59:28.065894 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHKeyPath
	I0717 18:59:28.066086 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHKeyPath
	I0717 18:59:28.066255 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHUsername
	I0717 18:59:28.066476 1077216 main.go:141] libmachine: Using SSH client type: native
	I0717 18:59:28.066933 1077216 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.20 22 <nil> <nil>}
	I0717 18:59:28.066952 1077216 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:59:28.415120 1077216 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:59:28.415159 1077216 main.go:141] libmachine: Checking connection to Docker...
	I0717 18:59:28.415171 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetURL
	I0717 18:59:28.416637 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | Using libvirt version 6000000
	I0717 18:59:28.419472 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:28.419849 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:dd:41", ip: ""} in network mk-ingress-addon-legacy-946642: {Iface:virbr1 ExpiryTime:2023-07-17 19:59:20 +0000 UTC Type:0 Mac:52:54:00:63:dd:41 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ingress-addon-legacy-946642 Clientid:01:52:54:00:63:dd:41}
	I0717 18:59:28.419874 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined IP address 192.168.39.20 and MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:28.420099 1077216 main.go:141] libmachine: Docker is up and running!
	I0717 18:59:28.420126 1077216 main.go:141] libmachine: Reticulating splines...
	I0717 18:59:28.420135 1077216 client.go:171] LocalClient.Create took 24.864299578s
	I0717 18:59:28.420160 1077216 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-946642" took 24.864375315s
	I0717 18:59:28.420170 1077216 start.go:300] post-start starting for "ingress-addon-legacy-946642" (driver="kvm2")
	I0717 18:59:28.420186 1077216 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:59:28.420206 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .DriverName
	I0717 18:59:28.420499 1077216 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:59:28.420534 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHHostname
	I0717 18:59:28.422575 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:28.422952 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:dd:41", ip: ""} in network mk-ingress-addon-legacy-946642: {Iface:virbr1 ExpiryTime:2023-07-17 19:59:20 +0000 UTC Type:0 Mac:52:54:00:63:dd:41 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ingress-addon-legacy-946642 Clientid:01:52:54:00:63:dd:41}
	I0717 18:59:28.422987 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined IP address 192.168.39.20 and MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:28.423216 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHPort
	I0717 18:59:28.423452 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHKeyPath
	I0717 18:59:28.423613 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHUsername
	I0717 18:59:28.423760 1077216 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/ingress-addon-legacy-946642/id_rsa Username:docker}
	I0717 18:59:28.520737 1077216 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:59:28.525713 1077216 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 18:59:28.525748 1077216 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/addons for local assets ...
	I0717 18:59:28.525819 1077216 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/files for local assets ...
	I0717 18:59:28.525907 1077216 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> 10689542.pem in /etc/ssl/certs
	I0717 18:59:28.525919 1077216 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> /etc/ssl/certs/10689542.pem
	I0717 18:59:28.526017 1077216 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:59:28.536606 1077216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 18:59:28.561189 1077216 start.go:303] post-start completed in 141.001002ms
	I0717 18:59:28.561256 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetConfigRaw
	I0717 18:59:28.561906 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetIP
	I0717 18:59:28.564657 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:28.565042 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:dd:41", ip: ""} in network mk-ingress-addon-legacy-946642: {Iface:virbr1 ExpiryTime:2023-07-17 19:59:20 +0000 UTC Type:0 Mac:52:54:00:63:dd:41 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ingress-addon-legacy-946642 Clientid:01:52:54:00:63:dd:41}
	I0717 18:59:28.565087 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined IP address 192.168.39.20 and MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:28.565371 1077216 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/config.json ...
	I0717 18:59:28.565582 1077216 start.go:128] duration metric: createHost completed in 25.03028062s
	I0717 18:59:28.565605 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHHostname
	I0717 18:59:28.568042 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:28.568398 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:dd:41", ip: ""} in network mk-ingress-addon-legacy-946642: {Iface:virbr1 ExpiryTime:2023-07-17 19:59:20 +0000 UTC Type:0 Mac:52:54:00:63:dd:41 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ingress-addon-legacy-946642 Clientid:01:52:54:00:63:dd:41}
	I0717 18:59:28.568426 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined IP address 192.168.39.20 and MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:28.568665 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHPort
	I0717 18:59:28.568879 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHKeyPath
	I0717 18:59:28.569085 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHKeyPath
	I0717 18:59:28.569272 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHUsername
	I0717 18:59:28.569454 1077216 main.go:141] libmachine: Using SSH client type: native
	I0717 18:59:28.569901 1077216 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.20 22 <nil> <nil>}
	I0717 18:59:28.569916 1077216 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 18:59:28.706823 1077216 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689620368.686428200
	
	I0717 18:59:28.706852 1077216 fix.go:206] guest clock: 1689620368.686428200
	I0717 18:59:28.706863 1077216 fix.go:219] Guest: 2023-07-17 18:59:28.6864282 +0000 UTC Remote: 2023-07-17 18:59:28.565592561 +0000 UTC m=+30.492674450 (delta=120.835639ms)
	I0717 18:59:28.706893 1077216 fix.go:190] guest clock delta is within tolerance: 120.835639ms
	I0717 18:59:28.706899 1077216 start.go:83] releasing machines lock for "ingress-addon-legacy-946642", held for 25.171690419s
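	The guest-clock lines compare the time reported by the VM (via date +%s.%N) against the host wall clock and only resync when the difference is too large. A minimal sketch of that comparison; the one-second tolerance used below is an assumption for illustration, not necessarily the threshold minikube applies:

	package main

	import (
		"fmt"
		"time"
	)

	// clockDeltaOK reports the absolute difference between guest and host
	// clocks and whether it is within tolerance.
	func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		host := time.Now()
		guest := host.Add(120 * time.Millisecond) // roughly the ~121ms delta seen in the log
		delta, ok := clockDeltaOK(guest, host, time.Second)
		fmt.Printf("delta=%v, within tolerance: %v\n", delta, ok)
	}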
	I0717 18:59:28.706927 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .DriverName
	I0717 18:59:28.707304 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetIP
	I0717 18:59:28.711036 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:28.711666 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:dd:41", ip: ""} in network mk-ingress-addon-legacy-946642: {Iface:virbr1 ExpiryTime:2023-07-17 19:59:20 +0000 UTC Type:0 Mac:52:54:00:63:dd:41 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ingress-addon-legacy-946642 Clientid:01:52:54:00:63:dd:41}
	I0717 18:59:28.711717 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined IP address 192.168.39.20 and MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:28.712100 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .DriverName
	I0717 18:59:28.712934 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .DriverName
	I0717 18:59:28.713213 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .DriverName
	I0717 18:59:28.713324 1077216 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:59:28.713403 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHHostname
	I0717 18:59:28.713567 1077216 ssh_runner.go:195] Run: cat /version.json
	I0717 18:59:28.713602 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHHostname
	I0717 18:59:28.717185 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:28.717458 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:28.717744 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:dd:41", ip: ""} in network mk-ingress-addon-legacy-946642: {Iface:virbr1 ExpiryTime:2023-07-17 19:59:20 +0000 UTC Type:0 Mac:52:54:00:63:dd:41 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ingress-addon-legacy-946642 Clientid:01:52:54:00:63:dd:41}
	I0717 18:59:28.717779 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined IP address 192.168.39.20 and MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:28.717946 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:dd:41", ip: ""} in network mk-ingress-addon-legacy-946642: {Iface:virbr1 ExpiryTime:2023-07-17 19:59:20 +0000 UTC Type:0 Mac:52:54:00:63:dd:41 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ingress-addon-legacy-946642 Clientid:01:52:54:00:63:dd:41}
	I0717 18:59:28.717978 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined IP address 192.168.39.20 and MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:28.717980 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHPort
	I0717 18:59:28.718180 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHPort
	I0717 18:59:28.718294 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHKeyPath
	I0717 18:59:28.718394 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHKeyPath
	I0717 18:59:28.718564 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHUsername
	I0717 18:59:28.718583 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHUsername
	I0717 18:59:28.718742 1077216 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/ingress-addon-legacy-946642/id_rsa Username:docker}
	I0717 18:59:28.718798 1077216 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/ingress-addon-legacy-946642/id_rsa Username:docker}
	W0717 18:59:28.833765 1077216 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 18:59:28.833863 1077216 ssh_runner.go:195] Run: systemctl --version
	I0717 18:59:28.840631 1077216 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:59:29.583246 1077216 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:59:29.589450 1077216 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:59:29.589541 1077216 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:59:29.605066 1077216 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 18:59:29.605104 1077216 start.go:469] detecting cgroup driver to use...
	I0717 18:59:29.605178 1077216 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:59:29.625359 1077216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:59:29.640191 1077216 docker.go:196] disabling cri-docker service (if available) ...
	I0717 18:59:29.640269 1077216 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:59:29.655298 1077216 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:59:29.670945 1077216 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:59:29.792898 1077216 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:59:29.921671 1077216 docker.go:212] disabling docker service ...
	I0717 18:59:29.921755 1077216 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:59:29.937456 1077216 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:59:29.950830 1077216 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:59:30.073831 1077216 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:59:30.197912 1077216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:59:30.212593 1077216 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:59:30.231902 1077216 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 18:59:30.231982 1077216 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:59:30.243868 1077216 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:59:30.243957 1077216 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:59:30.255264 1077216 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:59:30.267393 1077216 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:59:30.279782 1077216 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:59:30.292852 1077216 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:59:30.303470 1077216 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:59:30.303551 1077216 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:59:30.319045 1077216 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 18:59:30.328672 1077216 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:59:30.446461 1077216 ssh_runner.go:195] Run: sudo systemctl restart crio
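	The preceding commands point crictl at the CRI-O socket, pin pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf via sed, enable br_netfilter and IPv4 forwarding, and restart CRI-O. The same line-oriented config edit can be expressed directly; this sketch operates on an in-memory copy of the file and is illustrative rather than the command minikube runs:

	package main

	import (
		"fmt"
		"regexp"
	)

	var (
		pauseRe  = regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		cgroupRe = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	)

	// rewriteCrioConf pins the pause image and cgroup manager in a
	// 02-crio.conf snippet, mirroring the two sed invocations in the log.
	func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
		conf = pauseRe.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
		conf = cgroupRe.ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
		return conf
	}

	func main() {
		in := "pause_image = \"k8s.gcr.io/pause:3.6\"\ncgroup_manager = \"systemd\"\n"
		fmt.Print(rewriteCrioConf(in, "registry.k8s.io/pause:3.2", "cgroupfs"))
	}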
	I0717 18:59:30.633977 1077216 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:59:30.634077 1077216 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:59:30.640087 1077216 start.go:537] Will wait 60s for crictl version
	I0717 18:59:30.640173 1077216 ssh_runner.go:195] Run: which crictl
	I0717 18:59:30.644276 1077216 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:59:30.679138 1077216 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 18:59:30.679237 1077216 ssh_runner.go:195] Run: crio --version
	I0717 18:59:30.723865 1077216 ssh_runner.go:195] Run: crio --version
	I0717 18:59:30.780312 1077216 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.1 ...
	I0717 18:59:30.782212 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetIP
	I0717 18:59:30.784913 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:30.785300 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:dd:41", ip: ""} in network mk-ingress-addon-legacy-946642: {Iface:virbr1 ExpiryTime:2023-07-17 19:59:20 +0000 UTC Type:0 Mac:52:54:00:63:dd:41 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ingress-addon-legacy-946642 Clientid:01:52:54:00:63:dd:41}
	I0717 18:59:30.785334 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined IP address 192.168.39.20 and MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 18:59:30.785589 1077216 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 18:59:30.790386 1077216 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:59:30.803126 1077216 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0717 18:59:30.803232 1077216 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:59:30.834154 1077216 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0717 18:59:30.834240 1077216 ssh_runner.go:195] Run: which lz4
	I0717 18:59:30.838669 1077216 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0717 18:59:30.838764 1077216 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 18:59:30.843238 1077216 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 18:59:30.843283 1077216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0717 18:59:32.762516 1077216 crio.go:444] Took 1.923771 seconds to copy over tarball
	I0717 18:59:32.762613 1077216 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 18:59:36.238525 1077216 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.475851834s)
	I0717 18:59:36.238563 1077216 crio.go:451] Took 3.476009 seconds to extract the tarball
	I0717 18:59:36.238574 1077216 ssh_runner.go:146] rm: /preloaded.tar.lz4
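	The preload step copies a ~495 MB lz4 tarball over SSH and unpacks it into /var so CRI-O's image store is warm before kubeadm runs. A hedged sketch of the extraction half, shelling out the same way the log shows (paths are taken from the log; running it requires lz4 and sudo on the target host):

	package main

	import (
		"log"
		"os/exec"
		"time"
	)

	func main() {
		// Extract the preloaded image tarball into /var, as in the log:
		//   sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
		start := time.Now()
		cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		out, err := cmd.CombinedOutput()
		if err != nil {
			log.Fatalf("extract failed: %v\n%s", err, out)
		}
		log.Printf("extracted preload in %s", time.Since(start))
	}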
	I0717 18:59:36.285634 1077216 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:59:36.341219 1077216 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0717 18:59:36.341260 1077216 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 18:59:36.341354 1077216 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:59:36.341375 1077216 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0717 18:59:36.341411 1077216 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0717 18:59:36.341419 1077216 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 18:59:36.341431 1077216 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0717 18:59:36.341542 1077216 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 18:59:36.341630 1077216 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0717 18:59:36.341646 1077216 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0717 18:59:36.342619 1077216 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0717 18:59:36.342632 1077216 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 18:59:36.342645 1077216 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0717 18:59:36.342620 1077216 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:59:36.342675 1077216 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0717 18:59:36.342702 1077216 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0717 18:59:36.342624 1077216 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0717 18:59:36.342699 1077216 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 18:59:36.505875 1077216 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0717 18:59:36.511635 1077216 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0717 18:59:36.513876 1077216 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0717 18:59:36.520571 1077216 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 18:59:36.520597 1077216 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0717 18:59:36.527315 1077216 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0717 18:59:36.540304 1077216 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0717 18:59:36.635998 1077216 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0717 18:59:36.636058 1077216 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0717 18:59:36.636124 1077216 ssh_runner.go:195] Run: which crictl
	I0717 18:59:36.638822 1077216 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:59:36.655099 1077216 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0717 18:59:36.655149 1077216 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0717 18:59:36.655214 1077216 ssh_runner.go:195] Run: which crictl
	I0717 18:59:36.711741 1077216 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0717 18:59:36.711797 1077216 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0717 18:59:36.711863 1077216 ssh_runner.go:195] Run: which crictl
	I0717 18:59:36.731617 1077216 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0717 18:59:36.731670 1077216 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 18:59:36.731668 1077216 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0717 18:59:36.731713 1077216 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0717 18:59:36.731760 1077216 ssh_runner.go:195] Run: which crictl
	I0717 18:59:36.731761 1077216 ssh_runner.go:195] Run: which crictl
	I0717 18:59:36.746897 1077216 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0717 18:59:36.746948 1077216 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0717 18:59:36.747002 1077216 ssh_runner.go:195] Run: which crictl
	I0717 18:59:36.758117 1077216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0717 18:59:36.758157 1077216 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0717 18:59:36.758201 1077216 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0717 18:59:36.758246 1077216 ssh_runner.go:195] Run: which crictl
	I0717 18:59:36.869204 1077216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0717 18:59:36.869253 1077216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0717 18:59:36.869308 1077216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 18:59:36.869375 1077216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0717 18:59:36.869397 1077216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0717 18:59:36.869474 1077216 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0717 18:59:36.869525 1077216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0717 18:59:36.959225 1077216 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0717 18:59:36.983733 1077216 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0717 18:59:36.983818 1077216 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0717 18:59:36.983848 1077216 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0717 18:59:36.983895 1077216 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0717 18:59:36.983967 1077216 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0717 18:59:36.984010 1077216 cache_images.go:92] LoadImages completed in 642.734158ms
	W0717 18:59:36.984111 1077216 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
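	Each "needs transfer" decision above comes from asking the runtime whether an image already exists at the expected ID: when podman image inspect fails or reports a different hash, the image is removed with crictl and queued for loading from the local cache (which is empty here, hence the warning). A minimal sketch of that existence check; the helper name is illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// imagePresent reports whether the container runtime already has the
	// image, and if so at which ID, by shelling out to podman as the log does.
	func imagePresent(image string) (string, bool) {
		out, err := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", image).Output()
		if err != nil {
			return "", false // not present (or inspect failed); needs transfer
		}
		return strings.TrimSpace(string(out)), true
	}

	func main() {
		if id, ok := imagePresent("registry.k8s.io/pause:3.2"); ok {
			fmt.Println("already present at", id)
		} else {
			fmt.Println("needs transfer")
		}
	}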
	I0717 18:59:36.984183 1077216 ssh_runner.go:195] Run: crio config
	I0717 18:59:37.043622 1077216 cni.go:84] Creating CNI manager for ""
	I0717 18:59:37.043647 1077216 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:59:37.043673 1077216 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 18:59:37.043695 1077216 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.20 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-946642 NodeName:ingress-addon-legacy-946642 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.20"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.20 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 18:59:37.043879 1077216 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.20
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-946642"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.20
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.20"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:59:37.044094 1077216 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=ingress-addon-legacy-946642 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.20
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-946642 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 18:59:37.044178 1077216 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0717 18:59:37.054474 1077216 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:59:37.054583 1077216 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:59:37.064347 1077216 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (435 bytes)
	I0717 18:59:37.082158 1077216 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0717 18:59:37.100174 1077216 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
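	The kubeadm.yaml just written is rendered from the kubeadm options printed earlier. A small text/template sketch that reproduces only the InitConfiguration stanza from that YAML; the template text and map keys are an illustrative subset, not minikube's actual kubeadm template:

	package main

	import (
		"log"
		"os"
		"text/template"
	)

	// initConfigTmpl covers only the InitConfiguration stanza shown above.
	const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.NodeIP}}
	  taints: []
	`

	func main() {
		t := template.Must(template.New("init").Parse(initConfigTmpl))
		err := t.Execute(os.Stdout, map[string]any{
			"AdvertiseAddress": "192.168.39.20",
			"APIServerPort":    8443,
			"CRISocket":        "/var/run/crio/crio.sock",
			"NodeName":         "ingress-addon-legacy-946642",
			"NodeIP":           "192.168.39.20",
		})
		if err != nil {
			log.Fatal(err)
		}
	}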
	I0717 18:59:37.117950 1077216 ssh_runner.go:195] Run: grep 192.168.39.20	control-plane.minikube.internal$ /etc/hosts
	I0717 18:59:37.122753 1077216 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.20	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:59:37.137678 1077216 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642 for IP: 192.168.39.20
	I0717 18:59:37.137747 1077216 certs.go:190] acquiring lock for shared ca certs: {Name:mkdfbc203d7bd874aff2f1c4e5c3352251804e32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:59:37.137931 1077216 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key
	I0717 18:59:37.137975 1077216 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key
	I0717 18:59:37.138025 1077216 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.key
	I0717 18:59:37.138038 1077216 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt with IP's: []
	I0717 18:59:37.399978 1077216 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt ...
	I0717 18:59:37.400015 1077216 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt: {Name:mk5ab1c398c171b2731e845bfa6e9e9223715171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:59:37.400241 1077216 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.key ...
	I0717 18:59:37.400259 1077216 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.key: {Name:mk8333ae30c715365622de81f2cc3df8382ac7f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:59:37.400372 1077216 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/apiserver.key.2e41fa34
	I0717 18:59:37.400392 1077216 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/apiserver.crt.2e41fa34 with IP's: [192.168.39.20 10.96.0.1 127.0.0.1 10.0.0.1]
	I0717 18:59:37.572319 1077216 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/apiserver.crt.2e41fa34 ...
	I0717 18:59:37.572359 1077216 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/apiserver.crt.2e41fa34: {Name:mk9b32500bdcaff55c0bb817d1baa49c9c0a569b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:59:37.572594 1077216 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/apiserver.key.2e41fa34 ...
	I0717 18:59:37.572618 1077216 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/apiserver.key.2e41fa34: {Name:mkb2061d188b498d439a6fae1445951189f1cf72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:59:37.572737 1077216 certs.go:337] copying /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/apiserver.crt.2e41fa34 -> /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/apiserver.crt
	I0717 18:59:37.572897 1077216 certs.go:341] copying /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/apiserver.key.2e41fa34 -> /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/apiserver.key
	I0717 18:59:37.572955 1077216 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/proxy-client.key
	I0717 18:59:37.572978 1077216 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/proxy-client.crt with IP's: []
	I0717 18:59:37.682428 1077216 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/proxy-client.crt ...
	I0717 18:59:37.682473 1077216 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/proxy-client.crt: {Name:mk79a912bb34419a5bffbda5915a36e89670efa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:59:37.682701 1077216 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/proxy-client.key ...
	I0717 18:59:37.682722 1077216 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/proxy-client.key: {Name:mk2fe4667bab665434d408e9aff5133b13d6cdab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:59:37.682844 1077216 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 18:59:37.682868 1077216 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 18:59:37.682887 1077216 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 18:59:37.682903 1077216 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 18:59:37.682922 1077216 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 18:59:37.682937 1077216 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 18:59:37.682952 1077216 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 18:59:37.682972 1077216 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 18:59:37.683032 1077216 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem (1338 bytes)
	W0717 18:59:37.683086 1077216 certs.go:433] ignoring /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954_empty.pem, impossibly tiny 0 bytes
	I0717 18:59:37.683099 1077216 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 18:59:37.683130 1077216 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem (1082 bytes)
	I0717 18:59:37.683156 1077216 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:59:37.683191 1077216 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem (1675 bytes)
	I0717 18:59:37.683234 1077216 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 18:59:37.683267 1077216 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> /usr/share/ca-certificates/10689542.pem
	I0717 18:59:37.683284 1077216 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:59:37.683298 1077216 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem -> /usr/share/ca-certificates/1068954.pem
	I0717 18:59:37.683977 1077216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 18:59:37.710119 1077216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 18:59:37.736045 1077216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:59:37.763142 1077216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 18:59:37.790292 1077216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:59:37.819429 1077216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 18:59:37.848610 1077216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:59:37.876357 1077216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 18:59:37.903981 1077216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /usr/share/ca-certificates/10689542.pem (1708 bytes)
	I0717 18:59:37.932125 1077216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:59:37.957829 1077216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem --> /usr/share/ca-certificates/1068954.pem (1338 bytes)
	I0717 18:59:37.982077 1077216 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:59:38.000245 1077216 ssh_runner.go:195] Run: openssl version
	I0717 18:59:38.006550 1077216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10689542.pem && ln -fs /usr/share/ca-certificates/10689542.pem /etc/ssl/certs/10689542.pem"
	I0717 18:59:38.016883 1077216 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10689542.pem
	I0717 18:59:38.022556 1077216 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:52 /usr/share/ca-certificates/10689542.pem
	I0717 18:59:38.022622 1077216 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10689542.pem
	I0717 18:59:38.028431 1077216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10689542.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:59:38.038609 1077216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:59:38.048636 1077216 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:59:38.053900 1077216 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:59:38.053964 1077216 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:59:38.060295 1077216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:59:38.070539 1077216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1068954.pem && ln -fs /usr/share/ca-certificates/1068954.pem /etc/ssl/certs/1068954.pem"
	I0717 18:59:38.080581 1077216 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1068954.pem
	I0717 18:59:38.085523 1077216 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:52 /usr/share/ca-certificates/1068954.pem
	I0717 18:59:38.085618 1077216 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1068954.pem
	I0717 18:59:38.091785 1077216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1068954.pem /etc/ssl/certs/51391683.0"
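	(Editor's note: the lines above show the pattern used to publish each CA certificate to the guest's trust store — copy the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it into /etc/ssl/certs as <hash>.0, the layout OpenSSL-style CA directories expect. A minimal Go sketch of that idea follows; it is a hypothetical helper for illustration, not minikube's own code, and it shells out to openssl just like the logged commands do.)

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // installCA links certPath into /etc/ssl/certs under its OpenSSL
    // subject-hash name (<hash>.0), mirroring the "openssl x509 -hash"
    // plus "ln -fs" sequence in the log above. Illustrative only.
    func installCA(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	os.Remove(link) // equivalent of ln -fs overwriting an existing link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }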
	I0717 18:59:38.102105 1077216 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 18:59:38.106728 1077216 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 18:59:38.106791 1077216 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-946642 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-
946642 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.20 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 18:59:38.106894 1077216 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:59:38.106970 1077216 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:59:38.142481 1077216 cri.go:89] found id: ""
	I0717 18:59:38.142591 1077216 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 18:59:38.152042 1077216 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:59:38.161723 1077216 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:59:38.170764 1077216 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:59:38.170842 1077216 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0717 18:59:38.224827 1077216 kubeadm.go:322] W0717 18:59:38.216738     964 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0717 18:59:38.359943 1077216 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:59:41.570151 1077216 kubeadm.go:322] W0717 18:59:41.564149     964 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0717 18:59:41.579231 1077216 kubeadm.go:322] W0717 18:59:41.573318     964 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0717 18:59:52.278913 1077216 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0717 18:59:52.278978 1077216 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 18:59:52.279052 1077216 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:59:52.279134 1077216 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:59:52.279226 1077216 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:59:52.279320 1077216 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:59:52.279478 1077216 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:59:52.279545 1077216 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 18:59:52.279624 1077216 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:59:52.281636 1077216 out.go:204]   - Generating certificates and keys ...
	I0717 18:59:52.281728 1077216 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 18:59:52.281801 1077216 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 18:59:52.281859 1077216 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 18:59:52.281914 1077216 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0717 18:59:52.281964 1077216 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0717 18:59:52.282011 1077216 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0717 18:59:52.282088 1077216 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0717 18:59:52.282294 1077216 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-946642 localhost] and IPs [192.168.39.20 127.0.0.1 ::1]
	I0717 18:59:52.282374 1077216 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0717 18:59:52.282509 1077216 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-946642 localhost] and IPs [192.168.39.20 127.0.0.1 ::1]
	I0717 18:59:52.282607 1077216 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 18:59:52.282669 1077216 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 18:59:52.282717 1077216 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0717 18:59:52.282802 1077216 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:59:52.282859 1077216 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:59:52.282904 1077216 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:59:52.282961 1077216 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:59:52.283029 1077216 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:59:52.283111 1077216 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:59:52.285322 1077216 out.go:204]   - Booting up control plane ...
	I0717 18:59:52.285451 1077216 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:59:52.285573 1077216 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:59:52.285688 1077216 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:59:52.285797 1077216 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:59:52.285993 1077216 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 18:59:52.286071 1077216 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.003051 seconds
	I0717 18:59:52.286192 1077216 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 18:59:52.286362 1077216 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 18:59:52.286461 1077216 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 18:59:52.286625 1077216 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-946642 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0717 18:59:52.286711 1077216 kubeadm.go:322] [bootstrap-token] Using token: wf9s0c.r8xjibp9p223qtu1
	I0717 18:59:52.288674 1077216 out.go:204]   - Configuring RBAC rules ...
	I0717 18:59:52.288828 1077216 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 18:59:52.288932 1077216 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 18:59:52.289082 1077216 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 18:59:52.289224 1077216 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 18:59:52.289356 1077216 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 18:59:52.289460 1077216 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 18:59:52.289658 1077216 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 18:59:52.289730 1077216 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 18:59:52.289816 1077216 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 18:59:52.289834 1077216 kubeadm.go:322] 
	I0717 18:59:52.289886 1077216 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 18:59:52.289892 1077216 kubeadm.go:322] 
	I0717 18:59:52.290000 1077216 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 18:59:52.290020 1077216 kubeadm.go:322] 
	I0717 18:59:52.290049 1077216 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 18:59:52.290132 1077216 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 18:59:52.290215 1077216 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 18:59:52.290229 1077216 kubeadm.go:322] 
	I0717 18:59:52.290301 1077216 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 18:59:52.290430 1077216 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 18:59:52.290527 1077216 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 18:59:52.290536 1077216 kubeadm.go:322] 
	I0717 18:59:52.290646 1077216 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 18:59:52.290764 1077216 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 18:59:52.290778 1077216 kubeadm.go:322] 
	I0717 18:59:52.290877 1077216 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token wf9s0c.r8xjibp9p223qtu1 \
	I0717 18:59:52.291009 1077216 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:a35378d49f60cb3201ada96dd7ea3b95e7cce0b7f53bd002f18d31a55869c8fc \
	I0717 18:59:52.291051 1077216 kubeadm.go:322]     --control-plane 
	I0717 18:59:52.291061 1077216 kubeadm.go:322] 
	I0717 18:59:52.291159 1077216 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 18:59:52.291167 1077216 kubeadm.go:322] 
	I0717 18:59:52.291284 1077216 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token wf9s0c.r8xjibp9p223qtu1 \
	I0717 18:59:52.291445 1077216 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:a35378d49f60cb3201ada96dd7ea3b95e7cce0b7f53bd002f18d31a55869c8fc 
	I0717 18:59:52.291475 1077216 cni.go:84] Creating CNI manager for ""
	I0717 18:59:52.291492 1077216 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:59:52.293718 1077216 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:59:52.295562 1077216 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:59:52.309743 1077216 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
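	(Editor's note: at this step a small conflist for the bridge CNI plugin is written to /etc/cni/net.d/1-k8s.conflist. The 457-byte file itself is not reproduced in the log; the sketch below builds a representative bridge-plus-portmap conflist of that general shape and checks it parses as JSON. The subnet, network name, and field values are assumptions for illustration, not taken from this run.)

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // A representative CNI conflist: bridge plugin with host-local IPAM
    // plus portmap. Values such as the subnet are illustrative assumptions.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }`

    func main() {
    	var parsed map[string]any
    	if err := json.Unmarshal([]byte(conflist), &parsed); err != nil {
    		fmt.Fprintln(os.Stderr, "invalid conflist:", err)
    		os.Exit(1)
    	}
    	// The real file lands at /etc/cni/net.d/1-k8s.conflist in the step above.
    	_ = os.WriteFile("/tmp/1-k8s.conflist", []byte(conflist), 0o644)
    }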
	I0717 18:59:52.331904 1077216 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 18:59:52.332017 1077216 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5 minikube.k8s.io/name=ingress-addon-legacy-946642 minikube.k8s.io/updated_at=2023_07_17T18_59_52_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:59:52.332041 1077216 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:59:52.559803 1077216 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:59:52.655622 1077216 ops.go:34] apiserver oom_adj: -16
	I0717 18:59:53.302114 1077216 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:59:53.802250 1077216 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:59:54.301494 1077216 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:59:54.801772 1077216 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:59:55.301443 1077216 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:59:55.801460 1077216 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:59:56.302027 1077216 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:59:56.802134 1077216 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:59:57.302130 1077216 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:59:57.802278 1077216 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:59:58.301623 1077216 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:59:58.801658 1077216 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:59:59.301590 1077216 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:59:59.801579 1077216 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:00:00.301502 1077216 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:00:00.802282 1077216 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:00:01.301661 1077216 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:00:01.801538 1077216 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:00:02.301741 1077216 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:00:02.801722 1077216 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:00:03.301946 1077216 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:00:03.802233 1077216 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:00:04.302063 1077216 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:00:04.802370 1077216 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:00:05.301588 1077216 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:00:05.802034 1077216 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:00:06.302414 1077216 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:00:06.802011 1077216 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:00:07.301664 1077216 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:00:07.513590 1077216 kubeadm.go:1081] duration metric: took 15.181652698s to wait for elevateKubeSystemPrivileges.
	I0717 19:00:07.513642 1077216 kubeadm.go:406] StartCluster complete in 29.406857387s
	I0717 19:00:07.513672 1077216 settings.go:142] acquiring lock: {Name:mk3323296c186763db6ebb5a1c5ae94a6b1a6242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:00:07.513789 1077216 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 19:00:07.514951 1077216 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/kubeconfig: {Name:mk7f4fe48169c87be980c7edd9dbe55d4ea8b9ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:00:07.515298 1077216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 19:00:07.515541 1077216 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 19:00:07.515669 1077216 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-946642"
	I0717 19:00:07.515688 1077216 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-946642"
	I0717 19:00:07.515637 1077216 config.go:182] Loaded profile config "ingress-addon-legacy-946642": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0717 19:00:07.515781 1077216 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-946642"
	I0717 19:00:07.515811 1077216 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-946642"
	I0717 19:00:07.515755 1077216 host.go:66] Checking if "ingress-addon-legacy-946642" exists ...
	I0717 19:00:07.516088 1077216 kapi.go:59] client config for ingress-addon-legacy-946642: &rest.Config{Host:"https://192.168.39.20:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.key", CAFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:
[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
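	(Editor's note: the kapi.go dump above is a client-go rest.Config built from the profile's client certificate and key plus the cluster CA. Roughly, such a config is assembled as in the sketch below; the host and file paths are the ones printed in the log, everything else is an illustration with client-go, not minikube's actual code.)

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	profile := "/home/jenkins/minikube-integration/16890-1061725/.minikube"
    	cfg := &rest.Config{
    		Host: "https://192.168.39.20:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: profile + "/profiles/ingress-addon-legacy-946642/client.crt",
    			KeyFile:  profile + "/profiles/ingress-addon-legacy-946642/client.key",
    			CAFile:   profile + "/ca.crt",
    		},
    		// QPS/Burst left at zero means client-go defaults; the later
    		// "client-side throttling" messages in this log come from that
    		// default rate limiter, not from server-side priority and fairness.
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("nodes:", len(nodes.Items))
    }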
	I0717 19:00:07.516310 1077216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:00:07.516361 1077216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:00:07.516381 1077216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:00:07.516422 1077216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:00:07.517301 1077216 cert_rotation.go:137] Starting client certificate rotation controller
	I0717 19:00:07.534905 1077216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38855
	I0717 19:00:07.535517 1077216 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:00:07.536268 1077216 main.go:141] libmachine: Using API Version  1
	I0717 19:00:07.536304 1077216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:00:07.536783 1077216 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:00:07.537443 1077216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:00:07.537505 1077216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:00:07.538391 1077216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44195
	I0717 19:00:07.538885 1077216 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:00:07.539543 1077216 main.go:141] libmachine: Using API Version  1
	I0717 19:00:07.539569 1077216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:00:07.539970 1077216 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:00:07.540198 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetState
	I0717 19:00:07.544174 1077216 kapi.go:59] client config for ingress-addon-legacy-946642: &rest.Config{Host:"https://192.168.39.20:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.key", CAFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:
[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 19:00:07.554714 1077216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41355
	I0717 19:00:07.555229 1077216 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:00:07.555810 1077216 main.go:141] libmachine: Using API Version  1
	I0717 19:00:07.555844 1077216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:00:07.556258 1077216 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:00:07.556479 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetState
	I0717 19:00:07.558345 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .DriverName
	I0717 19:00:07.561202 1077216 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:00:07.563249 1077216 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:00:07.563275 1077216 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 19:00:07.563304 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHHostname
	I0717 19:00:07.567112 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 19:00:07.567733 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:dd:41", ip: ""} in network mk-ingress-addon-legacy-946642: {Iface:virbr1 ExpiryTime:2023-07-17 19:59:20 +0000 UTC Type:0 Mac:52:54:00:63:dd:41 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ingress-addon-legacy-946642 Clientid:01:52:54:00:63:dd:41}
	I0717 19:00:07.567790 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined IP address 192.168.39.20 and MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 19:00:07.568094 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHPort
	I0717 19:00:07.568388 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHKeyPath
	I0717 19:00:07.568592 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHUsername
	I0717 19:00:07.568782 1077216 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/ingress-addon-legacy-946642/id_rsa Username:docker}
	I0717 19:00:07.606704 1077216 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-946642"
	I0717 19:00:07.606769 1077216 host.go:66] Checking if "ingress-addon-legacy-946642" exists ...
	I0717 19:00:07.607220 1077216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:00:07.607275 1077216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:00:07.622547 1077216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45369
	I0717 19:00:07.623105 1077216 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:00:07.623719 1077216 main.go:141] libmachine: Using API Version  1
	I0717 19:00:07.623750 1077216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:00:07.624133 1077216 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:00:07.624870 1077216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:00:07.624929 1077216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:00:07.641173 1077216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34969
	I0717 19:00:07.641762 1077216 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:00:07.642315 1077216 main.go:141] libmachine: Using API Version  1
	I0717 19:00:07.642346 1077216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:00:07.642736 1077216 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:00:07.643010 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetState
	I0717 19:00:07.644889 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .DriverName
	I0717 19:00:07.645219 1077216 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 19:00:07.645254 1077216 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 19:00:07.645279 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHHostname
	I0717 19:00:07.648327 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 19:00:07.648817 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:dd:41", ip: ""} in network mk-ingress-addon-legacy-946642: {Iface:virbr1 ExpiryTime:2023-07-17 19:59:20 +0000 UTC Type:0 Mac:52:54:00:63:dd:41 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ingress-addon-legacy-946642 Clientid:01:52:54:00:63:dd:41}
	I0717 19:00:07.648854 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | domain ingress-addon-legacy-946642 has defined IP address 192.168.39.20 and MAC address 52:54:00:63:dd:41 in network mk-ingress-addon-legacy-946642
	I0717 19:00:07.649008 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHPort
	I0717 19:00:07.649198 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHKeyPath
	I0717 19:00:07.649367 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .GetSSHUsername
	I0717 19:00:07.649519 1077216 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/ingress-addon-legacy-946642/id_rsa Username:docker}
	W0717 19:00:07.656938 1077216 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-946642" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0717 19:00:07.656980 1077216 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0717 19:00:07.657053 1077216 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.20 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:00:07.660923 1077216 out.go:177] * Verifying Kubernetes components...
	I0717 19:00:07.663052 1077216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:00:07.788809 1077216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 19:00:07.789617 1077216 kapi.go:59] client config for ingress-addon-legacy-946642: &rest.Config{Host:"https://192.168.39.20:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.key", CAFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:
[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 19:00:07.790078 1077216 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-946642" to be "Ready" ...
	I0717 19:00:07.794832 1077216 node_ready.go:49] node "ingress-addon-legacy-946642" has status "Ready":"True"
	I0717 19:00:07.794862 1077216 node_ready.go:38] duration metric: took 4.742003ms waiting for node "ingress-addon-legacy-946642" to be "Ready" ...
	I0717 19:00:07.794880 1077216 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:00:07.810667 1077216 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-jtbmr" in "kube-system" namespace to be "Ready" ...
	I0717 19:00:08.024410 1077216 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:00:08.025258 1077216 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 19:00:08.929368 1077216 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.140499267s)
	I0717 19:00:08.929419 1077216 start.go:917] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
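	(Editor's note: the long sed/kubectl pipeline above rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host gateway IP, 192.168.39.1 in this run. The Go sketch below performs the same textual edit on a typical kubeadm-generated Corefile; the Corefile body is an assumed default used only to show where the hosts block lands, not content pulled from this cluster.)

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	// A typical kubeadm-generated Corefile, assumed for illustration.
    	corefile := `.:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
    }`
    	// Insert a hosts block ahead of the forward directive — the same
    	// effect as the sed expression in the logged pipeline.
    	hosts := "    hosts {\n       192.168.39.1 host.minikube.internal\n       fallthrough\n    }\n"
    	patched := strings.Replace(corefile,
    		"    forward . /etc/resolv.conf",
    		hosts+"    forward . /etc/resolv.conf", 1)
    	fmt.Println(patched)
    }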
	I0717 19:00:09.085148 1077216 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.060697764s)
	I0717 19:00:09.085204 1077216 main.go:141] libmachine: Making call to close driver server
	I0717 19:00:09.085220 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .Close
	I0717 19:00:09.085243 1077216 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.059956094s)
	I0717 19:00:09.085285 1077216 main.go:141] libmachine: Making call to close driver server
	I0717 19:00:09.085311 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .Close
	I0717 19:00:09.085705 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | Closing plugin on server side
	I0717 19:00:09.085705 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | Closing plugin on server side
	I0717 19:00:09.085715 1077216 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:00:09.085736 1077216 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:00:09.085739 1077216 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:00:09.085757 1077216 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:00:09.085768 1077216 main.go:141] libmachine: Making call to close driver server
	I0717 19:00:09.085778 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .Close
	I0717 19:00:09.085746 1077216 main.go:141] libmachine: Making call to close driver server
	I0717 19:00:09.085822 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .Close
	I0717 19:00:09.086074 1077216 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:00:09.086088 1077216 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:00:09.086107 1077216 main.go:141] libmachine: Making call to close driver server
	I0717 19:00:09.086127 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) Calling .Close
	I0717 19:00:09.086361 1077216 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:00:09.086371 1077216 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:00:09.087606 1077216 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:00:09.087621 1077216 main.go:141] libmachine: (ingress-addon-legacy-946642) DBG | Closing plugin on server side
	I0717 19:00:09.087628 1077216 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:00:09.089679 1077216 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0717 19:00:09.091767 1077216 addons.go:502] enable addons completed in 1.576229283s: enabled=[default-storageclass storage-provisioner]
	I0717 19:00:09.830792 1077216 pod_ready.go:102] pod "coredns-66bff467f8-jtbmr" in "kube-system" namespace has status "Ready":"False"
	I0717 19:00:12.327551 1077216 pod_ready.go:102] pod "coredns-66bff467f8-jtbmr" in "kube-system" namespace has status "Ready":"False"
	I0717 19:00:14.328046 1077216 pod_ready.go:102] pod "coredns-66bff467f8-jtbmr" in "kube-system" namespace has status "Ready":"False"
	I0717 19:00:16.828120 1077216 pod_ready.go:102] pod "coredns-66bff467f8-jtbmr" in "kube-system" namespace has status "Ready":"False"
	I0717 19:00:19.327202 1077216 pod_ready.go:102] pod "coredns-66bff467f8-jtbmr" in "kube-system" namespace has status "Ready":"False"
	I0717 19:00:21.328852 1077216 pod_ready.go:102] pod "coredns-66bff467f8-jtbmr" in "kube-system" namespace has status "Ready":"False"
	I0717 19:00:23.827222 1077216 pod_ready.go:102] pod "coredns-66bff467f8-jtbmr" in "kube-system" namespace has status "Ready":"False"
	I0717 19:00:26.327564 1077216 pod_ready.go:102] pod "coredns-66bff467f8-jtbmr" in "kube-system" namespace has status "Ready":"False"
	I0717 19:00:28.827699 1077216 pod_ready.go:102] pod "coredns-66bff467f8-jtbmr" in "kube-system" namespace has status "Ready":"False"
	I0717 19:00:30.829869 1077216 pod_ready.go:102] pod "coredns-66bff467f8-jtbmr" in "kube-system" namespace has status "Ready":"False"
	I0717 19:00:32.832585 1077216 pod_ready.go:102] pod "coredns-66bff467f8-jtbmr" in "kube-system" namespace has status "Ready":"False"
	I0717 19:00:35.327182 1077216 pod_ready.go:102] pod "coredns-66bff467f8-jtbmr" in "kube-system" namespace has status "Ready":"False"
	I0717 19:00:37.328028 1077216 pod_ready.go:102] pod "coredns-66bff467f8-jtbmr" in "kube-system" namespace has status "Ready":"False"
	I0717 19:00:39.827958 1077216 pod_ready.go:102] pod "coredns-66bff467f8-jtbmr" in "kube-system" namespace has status "Ready":"False"
	I0717 19:00:42.328459 1077216 pod_ready.go:92] pod "coredns-66bff467f8-jtbmr" in "kube-system" namespace has status "Ready":"True"
	I0717 19:00:42.328500 1077216 pod_ready.go:81] duration metric: took 34.51780113s waiting for pod "coredns-66bff467f8-jtbmr" in "kube-system" namespace to be "Ready" ...
	I0717 19:00:42.328511 1077216 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-xgpjw" in "kube-system" namespace to be "Ready" ...
	I0717 19:00:44.342997 1077216 pod_ready.go:102] pod "coredns-66bff467f8-xgpjw" in "kube-system" namespace has status "Ready":"False"
	I0717 19:00:46.343379 1077216 pod_ready.go:102] pod "coredns-66bff467f8-xgpjw" in "kube-system" namespace has status "Ready":"False"
	I0717 19:00:47.843549 1077216 pod_ready.go:92] pod "coredns-66bff467f8-xgpjw" in "kube-system" namespace has status "Ready":"True"
	I0717 19:00:47.843582 1077216 pod_ready.go:81] duration metric: took 5.515065027s waiting for pod "coredns-66bff467f8-xgpjw" in "kube-system" namespace to be "Ready" ...
	I0717 19:00:47.843601 1077216 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-946642" in "kube-system" namespace to be "Ready" ...
	I0717 19:00:47.858738 1077216 pod_ready.go:92] pod "etcd-ingress-addon-legacy-946642" in "kube-system" namespace has status "Ready":"True"
	I0717 19:00:47.858766 1077216 pod_ready.go:81] duration metric: took 15.158055ms waiting for pod "etcd-ingress-addon-legacy-946642" in "kube-system" namespace to be "Ready" ...
	I0717 19:00:47.858778 1077216 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-946642" in "kube-system" namespace to be "Ready" ...
	I0717 19:00:47.865377 1077216 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-946642" in "kube-system" namespace has status "Ready":"True"
	I0717 19:00:47.865403 1077216 pod_ready.go:81] duration metric: took 6.618703ms waiting for pod "kube-apiserver-ingress-addon-legacy-946642" in "kube-system" namespace to be "Ready" ...
	I0717 19:00:47.865415 1077216 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-946642" in "kube-system" namespace to be "Ready" ...
	I0717 19:00:47.873301 1077216 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-946642" in "kube-system" namespace has status "Ready":"True"
	I0717 19:00:47.873326 1077216 pod_ready.go:81] duration metric: took 7.904649ms waiting for pod "kube-controller-manager-ingress-addon-legacy-946642" in "kube-system" namespace to be "Ready" ...
	I0717 19:00:47.873337 1077216 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sqhr8" in "kube-system" namespace to be "Ready" ...
	I0717 19:00:47.881697 1077216 pod_ready.go:92] pod "kube-proxy-sqhr8" in "kube-system" namespace has status "Ready":"True"
	I0717 19:00:47.881733 1077216 pod_ready.go:81] duration metric: took 8.388407ms waiting for pod "kube-proxy-sqhr8" in "kube-system" namespace to be "Ready" ...
	I0717 19:00:47.881748 1077216 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-946642" in "kube-system" namespace to be "Ready" ...
	I0717 19:00:48.035257 1077216 request.go:628] Waited for 153.412087ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-946642
	I0717 19:00:48.234842 1077216 request.go:628] Waited for 195.202475ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/nodes/ingress-addon-legacy-946642
	I0717 19:00:48.238437 1077216 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-946642" in "kube-system" namespace has status "Ready":"True"
	I0717 19:00:48.238473 1077216 pod_ready.go:81] duration metric: took 356.716221ms waiting for pod "kube-scheduler-ingress-addon-legacy-946642" in "kube-system" namespace to be "Ready" ...
	I0717 19:00:48.238485 1077216 pod_ready.go:38] duration metric: took 40.443592471s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:00:48.238519 1077216 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:00:48.238646 1077216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:00:48.257278 1077216 api_server.go:72] duration metric: took 40.600175992s to wait for apiserver process to appear ...
	I0717 19:00:48.257318 1077216 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:00:48.257344 1077216 api_server.go:253] Checking apiserver healthz at https://192.168.39.20:8443/healthz ...
	I0717 19:00:48.269492 1077216 api_server.go:279] https://192.168.39.20:8443/healthz returned 200:
	ok
	I0717 19:00:48.270964 1077216 api_server.go:141] control plane version: v1.18.20
	I0717 19:00:48.270992 1077216 api_server.go:131] duration metric: took 13.666653ms to wait for apiserver health ...
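	(Editor's note: the healthz wait above amounts to an HTTPS GET against https://192.168.39.20:8443/healthz, trusting the cluster CA and expecting a 200 response with body "ok". A standalone sketch follows; it is not minikube's implementation, and the CA path is the one copied earlier in this log.)

    package main

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"time"
    )

    func main() {
    	ca, err := os.ReadFile("/home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	pool := x509.NewCertPool()
    	pool.AppendCertsFromPEM(ca)
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
    	}
    	resp, err := client.Get("https://192.168.39.20:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	// A healthy apiserver answers 200 with "ok", as seen in the log above.
    	fmt.Println(resp.StatusCode, string(body))
    }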
	I0717 19:00:48.271001 1077216 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:00:48.435512 1077216 request.go:628] Waited for 164.405854ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods
	I0717 19:00:48.443536 1077216 system_pods.go:59] 8 kube-system pods found
	I0717 19:00:48.443573 1077216 system_pods.go:61] "coredns-66bff467f8-jtbmr" [ec868a37-f17a-4a66-bf44-7b6f3d009f29] Running
	I0717 19:00:48.443578 1077216 system_pods.go:61] "coredns-66bff467f8-xgpjw" [ea5c31f8-7623-44bf-b725-2261362173dd] Running
	I0717 19:00:48.443586 1077216 system_pods.go:61] "etcd-ingress-addon-legacy-946642" [21b4edec-0992-43da-9255-902a8848a9b4] Running
	I0717 19:00:48.443590 1077216 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-946642" [9062d2d5-5aa2-4c4e-8b72-0a59f7f8ca36] Running
	I0717 19:00:48.443595 1077216 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-946642" [e35f1df6-b50b-4631-b5c8-d6bec7fc918b] Running
	I0717 19:00:48.443599 1077216 system_pods.go:61] "kube-proxy-sqhr8" [60604053-13c2-41dd-afc0-e7c6fe02c564] Running
	I0717 19:00:48.443603 1077216 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-946642" [3c496b7b-7d4e-4709-ad26-47777ebe6fb9] Running
	I0717 19:00:48.443607 1077216 system_pods.go:61] "storage-provisioner" [dd620fe3-9f2f-453b-84b4-764f4f61ca5c] Running
	I0717 19:00:48.443613 1077216 system_pods.go:74] duration metric: took 172.607307ms to wait for pod list to return data ...
	I0717 19:00:48.443622 1077216 default_sa.go:34] waiting for default service account to be created ...
	I0717 19:00:48.635896 1077216 request.go:628] Waited for 192.151589ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/namespaces/default/serviceaccounts
	I0717 19:00:48.639346 1077216 default_sa.go:45] found service account: "default"
	I0717 19:00:48.639383 1077216 default_sa.go:55] duration metric: took 195.755352ms for default service account to be created ...
	I0717 19:00:48.639393 1077216 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 19:00:48.834832 1077216 request.go:628] Waited for 195.360346ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods
	I0717 19:00:48.841911 1077216 system_pods.go:86] 8 kube-system pods found
	I0717 19:00:48.841945 1077216 system_pods.go:89] "coredns-66bff467f8-jtbmr" [ec868a37-f17a-4a66-bf44-7b6f3d009f29] Running
	I0717 19:00:48.841951 1077216 system_pods.go:89] "coredns-66bff467f8-xgpjw" [ea5c31f8-7623-44bf-b725-2261362173dd] Running
	I0717 19:00:48.841955 1077216 system_pods.go:89] "etcd-ingress-addon-legacy-946642" [21b4edec-0992-43da-9255-902a8848a9b4] Running
	I0717 19:00:48.841961 1077216 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-946642" [9062d2d5-5aa2-4c4e-8b72-0a59f7f8ca36] Running
	I0717 19:00:48.841965 1077216 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-946642" [e35f1df6-b50b-4631-b5c8-d6bec7fc918b] Running
	I0717 19:00:48.841969 1077216 system_pods.go:89] "kube-proxy-sqhr8" [60604053-13c2-41dd-afc0-e7c6fe02c564] Running
	I0717 19:00:48.841973 1077216 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-946642" [3c496b7b-7d4e-4709-ad26-47777ebe6fb9] Running
	I0717 19:00:48.841977 1077216 system_pods.go:89] "storage-provisioner" [dd620fe3-9f2f-453b-84b4-764f4f61ca5c] Running
	I0717 19:00:48.841984 1077216 system_pods.go:126] duration metric: took 202.586499ms to wait for k8s-apps to be running ...
	I0717 19:00:48.841992 1077216 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 19:00:48.842041 1077216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:00:48.857977 1077216 system_svc.go:56] duration metric: took 15.972648ms WaitForService to wait for kubelet.
	I0717 19:00:48.858028 1077216 kubeadm.go:581] duration metric: took 41.200923418s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 19:00:48.858091 1077216 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:00:49.035733 1077216 request.go:628] Waited for 177.437232ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/nodes
	I0717 19:00:49.039564 1077216 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 19:00:49.039623 1077216 node_conditions.go:123] node cpu capacity is 2
	I0717 19:00:49.039637 1077216 node_conditions.go:105] duration metric: took 181.540963ms to run NodePressure ...
	I0717 19:00:49.039649 1077216 start.go:228] waiting for startup goroutines ...
	I0717 19:00:49.039655 1077216 start.go:233] waiting for cluster config update ...
	I0717 19:00:49.039667 1077216 start.go:242] writing updated cluster config ...
	I0717 19:00:49.039961 1077216 ssh_runner.go:195] Run: rm -f paused
	I0717 19:00:49.093748 1077216 start.go:578] kubectl: 1.27.3, cluster: 1.18.20 (minor skew: 9)
	I0717 19:00:49.096491 1077216 out.go:177] 
	W0717 19:00:49.098557 1077216 out.go:239] ! /usr/local/bin/kubectl is version 1.27.3, which may have incompatibilities with Kubernetes 1.18.20.
	I0717 19:00:49.100559 1077216 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0717 19:00:49.102620 1077216 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-946642" cluster and "default" namespace by default
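A note on the CRI-O journal that follows: the repeated debug entries are /runtime.v1alpha2.RuntimeService/ListContainers calls made with an empty filter, so CRI-O returns the full container list each time. Below is a minimal Go sketch of such a call, offered only for orientation; it assumes the default CRI-O socket path /var/run/crio/crio.sock and the k8s.io/cri-api v1alpha2 client, and is not code from the test harness itself.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
)

func main() {
	// Assumption: CRI-O listens on its default unix socket on the node.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI-O socket: %v", err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty filter matches every container, which corresponds to the
	// "No filters were applied, returning full container list" journal lines.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{},
	})
	if err != nil {
		log.Fatalf("ListContainers: %v", err)
	}
	for _, c := range resp.Containers {
		// Print container ID, name, and state (RUNNING/EXITED), roughly the
		// fields visible in the ListContainersResponse entries below.
		fmt.Printf("%s  %s  %v\n", c.Id, c.Metadata.Name, c.State)
	}
}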
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-07-17 18:59:16 UTC, ends at Mon 2023-07-17 19:03:52 UTC. --
	Jul 17 19:03:51 ingress-addon-legacy-946642 crio[719]: time="2023-07-17 19:03:51.598093619Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1d2c3d1a6e05f5050053c10fb9be0783e8aaa2d1857b472d4e26102e61126982,PodSandboxId:06530ad76b0cd7076d601410beca1f85f367e3f5f2056c5726479f7f93a95c80,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1689620621679144099,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-tq4hc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 08481f14-f670-4088-ad2f-44dfc71ffa1e,},Annotations:map[string]string{io.kubernetes.container.hash: 1edfe5cd,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02145f263c40ac364530053f693689b4353315e2071f17ffb4892bb81d7be79b,PodSandboxId:d202c8dd3176c8a7722dfd85faf3a2ec0e1ae251bef45eaeac1cb1b02423df4a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1689620481301417542,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c50750ac-5831-4d65-94e3-c90c66a1eaae,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 10617d09,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00214624c3b5721ec913981f76a4f3d64fe73f6f6abc64af638e16b3238a229a,PodSandboxId:82c1f363eb0c5e07a1b0e9c9a9b180edbb3688018c7f90aa4bd8dc86b92780e0,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1689620461765731828,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-6p4x6,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2d8c75d9-4f0c-4089-b914-b962d27b872a,},Annotations:map[string]string{io.kubernetes.container.hash: 92da1f6a,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:352382e8bc7fc45d89a9e6284bd5941d06f78167666204c3e8978c5d6023da69,PodSandboxId:aec47474b4fd87cb391949d1b2c4bfaed1e844ac4129afda6313a51cf0b499f1,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1689620452504147936,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-q7mkd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4dc48604-2d11-4714-95bf-3c4102c37863,},Annotations:map[string]string{io.kubernetes.container.hash: 30f4c6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f8b4e235c96b60560b65ec040d751fc9849b1690f0350675632db1084b911b3,PodSandboxId:eb29123bf5b0434fc52937eed9d010c4f32be45fa9b78553e324b133bef274c7,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1689620452333632934,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nrnbd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 18a1bea5-ae48-41a1-ad08-c974f450856f,},Annotations:map[string]string{io.kubernetes.container.hash: 720542bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ac4ab34b6f8547ae47be71931004bee53e3f8a024d5f41b95781a6c20ac1640,PodSandboxId:c39575fb72b2428c6bb97332c691615fbf60a8ebbccbbd97d654a5d3e4a7bcc9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689620409865556094,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd620fe3-9f2f-453b-84b4-764f4f61ca5c,},Annotations:map[string]string{io.kubernetes.container.hash: 734f5e2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78743d1ce2adb97cac01af8bc0dd05b9b1912d19ce27d7de72be760af296784a,PodSandboxId:e926497311c1cbdd9c82b9829f46d6c8e4474ca1b083853f057320bdd7f40c3a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1689620409301068867,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqhr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60604053-13c2-41dd-afc0-e7c6fe02c564,},Annotations:map[string]string{io.kubernetes.container.hash: a2a589eb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca8adfeba943bdeb2328c2192e2e9e21b69a34e28713c2ed3493403d86105205,PodSandboxId:d4ad6f5eecb813c5d56af60277babddd8bd298c370c8eccb0d358d8d407567b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1689620408525587859,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-jtbmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec868a37-f17a-4a66-bf44-7b6f3d009f29,},Annotations:map[string]string{io.kubernetes.container.hash: 81eb2838,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8af39171495917947445c243ee1154b693137f097752ea143aeda966ea8a85ae,Pod
SandboxId:fb34f49c189449709b428086df82dcf9afcb40babba4d51ca0cc39de73d53ffb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1689620408553521700,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-xgpjw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea5c31f8-7623-44bf-b725-2261362173dd,},Annotations:map[string]string{io.kubernetes.container.hash: 81eb2838,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:702e5660b3b165fba93073e44476a48885831ed2cc7facc23c95cca342ef5113,PodSandboxId:097d1750f3cd63914d78e1135c7b423494c5a2c60a58a28e17e29ba20cd1f51f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1689620384814484441,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-946642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4020d4221a809345951a8526bf5735a7,},Annotations:map[string]string{io.kubernetes.container.hash: d4c104a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f78582daaa86aa1b9a47c6333087ee63511f484d092a14b675686e483b4fc60,PodSandboxId:d83bf5c9b4cf4e86a05aec9f7056590cbc81da2df97f84e301e286ef56347bae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1689620383470974942,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-946642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf81fc6cf537aca2df03dfe654e45f8a2e8bccba0290de6abc7b4a92e22708ac,PodSandboxId:a877732ef74def8a8ce3e8dca128802237195ac51a7bfbf0c17383dbb239950f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1689620383352343836,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-946642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:113be5574875379c132a166f334b77452623787af9883412b43730bc79476120,PodSandboxId:a8da5d63bd2d57b928c0ba7c4a504d6848be0e35e93d36f60268c32cadc80dea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1689620383170394841,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-946642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 311d51cbd29d6849e1eae0e8be2b5e4c,},Annotations:map[string]string{io.kubernetes.container.hash: 54bc8ddf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1cbb781a-d15f-44cb-9523-7fe731fb48c5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:03:51 ingress-addon-legacy-946642 crio[719]: time="2023-07-17 19:03:51.788663515Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=023d61e8-369a-45c9-be18-6cbe4de80e5b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:03:51 ingress-addon-legacy-946642 crio[719]: time="2023-07-17 19:03:51.788828342Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=023d61e8-369a-45c9-be18-6cbe4de80e5b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:03:51 ingress-addon-legacy-946642 crio[719]: time="2023-07-17 19:03:51.789283286Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1d2c3d1a6e05f5050053c10fb9be0783e8aaa2d1857b472d4e26102e61126982,PodSandboxId:06530ad76b0cd7076d601410beca1f85f367e3f5f2056c5726479f7f93a95c80,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1689620621679144099,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-tq4hc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 08481f14-f670-4088-ad2f-44dfc71ffa1e,},Annotations:map[string]string{io.kubernetes.container.hash: 1edfe5cd,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02145f263c40ac364530053f693689b4353315e2071f17ffb4892bb81d7be79b,PodSandboxId:d202c8dd3176c8a7722dfd85faf3a2ec0e1ae251bef45eaeac1cb1b02423df4a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1689620481301417542,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c50750ac-5831-4d65-94e3-c90c66a1eaae,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 10617d09,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00214624c3b5721ec913981f76a4f3d64fe73f6f6abc64af638e16b3238a229a,PodSandboxId:82c1f363eb0c5e07a1b0e9c9a9b180edbb3688018c7f90aa4bd8dc86b92780e0,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1689620461765731828,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-6p4x6,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2d8c75d9-4f0c-4089-b914-b962d27b872a,},Annotations:map[string]string{io.kubernetes.container.hash: 92da1f6a,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:352382e8bc7fc45d89a9e6284bd5941d06f78167666204c3e8978c5d6023da69,PodSandboxId:aec47474b4fd87cb391949d1b2c4bfaed1e844ac4129afda6313a51cf0b499f1,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1689620452504147936,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-q7mkd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4dc48604-2d11-4714-95bf-3c4102c37863,},Annotations:map[string]string{io.kubernetes.container.hash: 30f4c6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f8b4e235c96b60560b65ec040d751fc9849b1690f0350675632db1084b911b3,PodSandboxId:eb29123bf5b0434fc52937eed9d010c4f32be45fa9b78553e324b133bef274c7,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1689620452333632934,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nrnbd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 18a1bea5-ae48-41a1-ad08-c974f450856f,},Annotations:map[string]string{io.kubernetes.container.hash: 720542bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ac4ab34b6f8547ae47be71931004bee53e3f8a024d5f41b95781a6c20ac1640,PodSandboxId:c39575fb72b2428c6bb97332c691615fbf60a8ebbccbbd97d654a5d3e4a7bcc9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689620409865556094,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd620fe3-9f2f-453b-84b4-764f4f61ca5c,},Annotations:map[string]string{io.kubernetes.container.hash: 734f5e2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78743d1ce2adb97cac01af8bc0dd05b9b1912d19ce27d7de72be760af296784a,PodSandboxId:e926497311c1cbdd9c82b9829f46d6c8e4474ca1b083853f057320bdd7f40c3a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1689620409301068867,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqhr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60604053-13c2-41dd-afc0-e7c6fe02c564,},Annotations:map[string]string{io.kubernetes.container.hash: a2a589eb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca8adfeba943bdeb2328c2192e2e9e21b69a34e28713c2ed3493403d86105205,PodSandboxId:d4ad6f5eecb813c5d56af60277babddd8bd298c370c8eccb0d358d8d407567b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1689620408525587859,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-jtbmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec868a37-f17a-4a66-bf44-7b6f3d009f29,},Annotations:map[string]string{io.kubernetes.container.hash: 81eb2838,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8af39171495917947445c243ee1154b693137f097752ea143aeda966ea8a85ae,Pod
SandboxId:fb34f49c189449709b428086df82dcf9afcb40babba4d51ca0cc39de73d53ffb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1689620408553521700,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-xgpjw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea5c31f8-7623-44bf-b725-2261362173dd,},Annotations:map[string]string{io.kubernetes.container.hash: 81eb2838,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:702e5660b3b165fba93073e44476a48885831ed2cc7facc23c95cca342ef5113,PodSandboxId:097d1750f3cd63914d78e1135c7b423494c5a2c60a58a28e17e29ba20cd1f51f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1689620384814484441,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-946642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4020d4221a809345951a8526bf5735a7,},Annotations:map[string]string{io.kubernetes.container.hash: d4c104a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f78582daaa86aa1b9a47c6333087ee63511f484d092a14b675686e483b4fc60,PodSandboxId:d83bf5c9b4cf4e86a05aec9f7056590cbc81da2df97f84e301e286ef56347bae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1689620383470974942,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-946642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf81fc6cf537aca2df03dfe654e45f8a2e8bccba0290de6abc7b4a92e22708ac,PodSandboxId:a877732ef74def8a8ce3e8dca128802237195ac51a7bfbf0c17383dbb239950f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1689620383352343836,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-946642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:113be5574875379c132a166f334b77452623787af9883412b43730bc79476120,PodSandboxId:a8da5d63bd2d57b928c0ba7c4a504d6848be0e35e93d36f60268c32cadc80dea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1689620383170394841,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-946642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 311d51cbd29d6849e1eae0e8be2b5e4c,},Annotations:map[string]string{io.kubernetes.container.hash: 54bc8ddf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=023d61e8-369a-45c9-be18-6cbe4de80e5b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:03:51 ingress-addon-legacy-946642 crio[719]: time="2023-07-17 19:03:51.826355437Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c48a51c4-c3af-457a-a551-e6e2bc135bc9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:03:51 ingress-addon-legacy-946642 crio[719]: time="2023-07-17 19:03:51.826454686Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c48a51c4-c3af-457a-a551-e6e2bc135bc9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:03:51 ingress-addon-legacy-946642 crio[719]: time="2023-07-17 19:03:51.826781968Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1d2c3d1a6e05f5050053c10fb9be0783e8aaa2d1857b472d4e26102e61126982,PodSandboxId:06530ad76b0cd7076d601410beca1f85f367e3f5f2056c5726479f7f93a95c80,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1689620621679144099,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-tq4hc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 08481f14-f670-4088-ad2f-44dfc71ffa1e,},Annotations:map[string]string{io.kubernetes.container.hash: 1edfe5cd,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02145f263c40ac364530053f693689b4353315e2071f17ffb4892bb81d7be79b,PodSandboxId:d202c8dd3176c8a7722dfd85faf3a2ec0e1ae251bef45eaeac1cb1b02423df4a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1689620481301417542,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c50750ac-5831-4d65-94e3-c90c66a1eaae,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 10617d09,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00214624c3b5721ec913981f76a4f3d64fe73f6f6abc64af638e16b3238a229a,PodSandboxId:82c1f363eb0c5e07a1b0e9c9a9b180edbb3688018c7f90aa4bd8dc86b92780e0,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1689620461765731828,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-6p4x6,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2d8c75d9-4f0c-4089-b914-b962d27b872a,},Annotations:map[string]string{io.kubernetes.container.hash: 92da1f6a,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:352382e8bc7fc45d89a9e6284bd5941d06f78167666204c3e8978c5d6023da69,PodSandboxId:aec47474b4fd87cb391949d1b2c4bfaed1e844ac4129afda6313a51cf0b499f1,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1689620452504147936,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-q7mkd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4dc48604-2d11-4714-95bf-3c4102c37863,},Annotations:map[string]string{io.kubernetes.container.hash: 30f4c6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f8b4e235c96b60560b65ec040d751fc9849b1690f0350675632db1084b911b3,PodSandboxId:eb29123bf5b0434fc52937eed9d010c4f32be45fa9b78553e324b133bef274c7,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1689620452333632934,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nrnbd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 18a1bea5-ae48-41a1-ad08-c974f450856f,},Annotations:map[string]string{io.kubernetes.container.hash: 720542bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ac4ab34b6f8547ae47be71931004bee53e3f8a024d5f41b95781a6c20ac1640,PodSandboxId:c39575fb72b2428c6bb97332c691615fbf60a8ebbccbbd97d654a5d3e4a7bcc9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689620409865556094,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd620fe3-9f2f-453b-84b4-764f4f61ca5c,},Annotations:map[string]string{io.kubernetes.container.hash: 734f5e2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78743d1ce2adb97cac01af8bc0dd05b9b1912d19ce27d7de72be760af296784a,PodSandboxId:e926497311c1cbdd9c82b9829f46d6c8e4474ca1b083853f057320bdd7f40c3a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1689620409301068867,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqhr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60604053-13c2-41dd-afc0-e7c6fe02c564,},Annotations:map[string]string{io.kubernetes.container.hash: a2a589eb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca8adfeba943bdeb2328c2192e2e9e21b69a34e28713c2ed3493403d86105205,PodSandboxId:d4ad6f5eecb813c5d56af60277babddd8bd298c370c8eccb0d358d8d407567b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1689620408525587859,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-jtbmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec868a37-f17a-4a66-bf44-7b6f3d009f29,},Annotations:map[string]string{io.kubernetes.container.hash: 81eb2838,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8af39171495917947445c243ee1154b693137f097752ea143aeda966ea8a85ae,Pod
SandboxId:fb34f49c189449709b428086df82dcf9afcb40babba4d51ca0cc39de73d53ffb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1689620408553521700,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-xgpjw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea5c31f8-7623-44bf-b725-2261362173dd,},Annotations:map[string]string{io.kubernetes.container.hash: 81eb2838,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:702e5660b3b165fba93073e44476a48885831ed2cc7facc23c95cca342ef5113,PodSandboxId:097d1750f3cd63914d78e1135c7b423494c5a2c60a58a28e17e29ba20cd1f51f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1689620384814484441,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-946642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4020d4221a809345951a8526bf5735a7,},Annotations:map[string]string{io.kubernetes.container.hash: d4c104a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f78582daaa86aa1b9a47c6333087ee63511f484d092a14b675686e483b4fc60,PodSandboxId:d83bf5c9b4cf4e86a05aec9f7056590cbc81da2df97f84e301e286ef56347bae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1689620383470974942,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-946642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf81fc6cf537aca2df03dfe654e45f8a2e8bccba0290de6abc7b4a92e22708ac,PodSandboxId:a877732ef74def8a8ce3e8dca128802237195ac51a7bfbf0c17383dbb239950f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1689620383352343836,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-946642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:113be5574875379c132a166f334b77452623787af9883412b43730bc79476120,PodSandboxId:a8da5d63bd2d57b928c0ba7c4a504d6848be0e35e93d36f60268c32cadc80dea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1689620383170394841,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-946642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 311d51cbd29d6849e1eae0e8be2b5e4c,},Annotations:map[string]string{io.kubernetes.container.hash: 54bc8ddf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c48a51c4-c3af-457a-a551-e6e2bc135bc9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:03:51 ingress-addon-legacy-946642 crio[719]: time="2023-07-17 19:03:51.868891848Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0fd53056-f872-4472-bbda-87884ac93eb7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:03:51 ingress-addon-legacy-946642 crio[719]: time="2023-07-17 19:03:51.869079886Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0fd53056-f872-4472-bbda-87884ac93eb7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:03:51 ingress-addon-legacy-946642 crio[719]: time="2023-07-17 19:03:51.869458855Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1d2c3d1a6e05f5050053c10fb9be0783e8aaa2d1857b472d4e26102e61126982,PodSandboxId:06530ad76b0cd7076d601410beca1f85f367e3f5f2056c5726479f7f93a95c80,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1689620621679144099,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-tq4hc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 08481f14-f670-4088-ad2f-44dfc71ffa1e,},Annotations:map[string]string{io.kubernetes.container.hash: 1edfe5cd,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02145f263c40ac364530053f693689b4353315e2071f17ffb4892bb81d7be79b,PodSandboxId:d202c8dd3176c8a7722dfd85faf3a2ec0e1ae251bef45eaeac1cb1b02423df4a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1689620481301417542,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c50750ac-5831-4d65-94e3-c90c66a1eaae,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 10617d09,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00214624c3b5721ec913981f76a4f3d64fe73f6f6abc64af638e16b3238a229a,PodSandboxId:82c1f363eb0c5e07a1b0e9c9a9b180edbb3688018c7f90aa4bd8dc86b92780e0,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1689620461765731828,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-6p4x6,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2d8c75d9-4f0c-4089-b914-b962d27b872a,},Annotations:map[string]string{io.kubernetes.container.hash: 92da1f6a,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:352382e8bc7fc45d89a9e6284bd5941d06f78167666204c3e8978c5d6023da69,PodSandboxId:aec47474b4fd87cb391949d1b2c4bfaed1e844ac4129afda6313a51cf0b499f1,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1689620452504147936,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-q7mkd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4dc48604-2d11-4714-95bf-3c4102c37863,},Annotations:map[string]string{io.kubernetes.container.hash: 30f4c6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f8b4e235c96b60560b65ec040d751fc9849b1690f0350675632db1084b911b3,PodSandboxId:eb29123bf5b0434fc52937eed9d010c4f32be45fa9b78553e324b133bef274c7,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1689620452333632934,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nrnbd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 18a1bea5-ae48-41a1-ad08-c974f450856f,},Annotations:map[string]string{io.kubernetes.container.hash: 720542bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ac4ab34b6f8547ae47be71931004bee53e3f8a024d5f41b95781a6c20ac1640,PodSandboxId:c39575fb72b2428c6bb97332c691615fbf60a8ebbccbbd97d654a5d3e4a7bcc9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689620409865556094,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd620fe3-9f2f-453b-84b4-764f4f61ca5c,},Annotations:map[string]string{io.kubernetes.container.hash: 734f5e2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78743d1ce2adb97cac01af8bc0dd05b9b1912d19ce27d7de72be760af296784a,PodSandboxId:e926497311c1cbdd9c82b9829f46d6c8e4474ca1b083853f057320bdd7f40c3a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1689620409301068867,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqhr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60604053-13c2-41dd-afc0-e7c6fe02c564,},Annotations:map[string]string{io.kubernetes.container.hash: a2a589eb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca8adfeba943bdeb2328c2192e2e9e21b69a34e28713c2ed3493403d86105205,PodSandboxId:d4ad6f5eecb813c5d56af60277babddd8bd298c370c8eccb0d358d8d407567b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1689620408525587859,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-jtbmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec868a37-f17a-4a66-bf44-7b6f3d009f29,},Annotations:map[string]string{io.kubernetes.container.hash: 81eb2838,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8af39171495917947445c243ee1154b693137f097752ea143aeda966ea8a85ae,Pod
SandboxId:fb34f49c189449709b428086df82dcf9afcb40babba4d51ca0cc39de73d53ffb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1689620408553521700,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-xgpjw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea5c31f8-7623-44bf-b725-2261362173dd,},Annotations:map[string]string{io.kubernetes.container.hash: 81eb2838,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:702e5660b3b165fba93073e44476a48885831ed2cc7facc23c95cca342ef5113,PodSandboxId:097d1750f3cd63914d78e1135c7b423494c5a2c60a58a28e17e29ba20cd1f51f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1689620384814484441,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-946642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4020d4221a809345951a8526bf5735a7,},Annotations:map[string]string{io.kubernetes.container.hash: d4c104a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f78582daaa86aa1b9a47c6333087ee63511f484d092a14b675686e483b4fc60,PodSandboxId:d83bf5c9b4cf4e86a05aec9f7056590cbc81da2df97f84e301e286ef56347bae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1689620383470974942,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-946642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf81fc6cf537aca2df03dfe654e45f8a2e8bccba0290de6abc7b4a92e22708ac,PodSandboxId:a877732ef74def8a8ce3e8dca128802237195ac51a7bfbf0c17383dbb239950f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1689620383352343836,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-946642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:113be5574875379c132a166f334b77452623787af9883412b43730bc79476120,PodSandboxId:a8da5d63bd2d57b928c0ba7c4a504d6848be0e35e93d36f60268c32cadc80dea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1689620383170394841,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-946642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 311d51cbd29d6849e1eae0e8be2b5e4c,},Annotations:map[string]string{io.kubernetes.container.hash: 54bc8ddf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0fd53056-f872-4472-bbda-87884ac93eb7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:03:51 ingress-addon-legacy-946642 crio[719]: time="2023-07-17 19:03:51.911331870Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=01857b68-8bcb-4671-aba3-6045304ad144 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:03:51 ingress-addon-legacy-946642 crio[719]: time="2023-07-17 19:03:51.911405251Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=01857b68-8bcb-4671-aba3-6045304ad144 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:03:51 ingress-addon-legacy-946642 crio[719]: time="2023-07-17 19:03:51.911745044Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1d2c3d1a6e05f5050053c10fb9be0783e8aaa2d1857b472d4e26102e61126982,PodSandboxId:06530ad76b0cd7076d601410beca1f85f367e3f5f2056c5726479f7f93a95c80,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1689620621679144099,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-tq4hc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 08481f14-f670-4088-ad2f-44dfc71ffa1e,},Annotations:map[string]string{io.kubernetes.container.hash: 1edfe5cd,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02145f263c40ac364530053f693689b4353315e2071f17ffb4892bb81d7be79b,PodSandboxId:d202c8dd3176c8a7722dfd85faf3a2ec0e1ae251bef45eaeac1cb1b02423df4a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1689620481301417542,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c50750ac-5831-4d65-94e3-c90c66a1eaae,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 10617d09,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00214624c3b5721ec913981f76a4f3d64fe73f6f6abc64af638e16b3238a229a,PodSandboxId:82c1f363eb0c5e07a1b0e9c9a9b180edbb3688018c7f90aa4bd8dc86b92780e0,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1689620461765731828,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-6p4x6,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2d8c75d9-4f0c-4089-b914-b962d27b872a,},Annotations:map[string]string{io.kubernetes.container.hash: 92da1f6a,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:352382e8bc7fc45d89a9e6284bd5941d06f78167666204c3e8978c5d6023da69,PodSandboxId:aec47474b4fd87cb391949d1b2c4bfaed1e844ac4129afda6313a51cf0b499f1,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1689620452504147936,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-q7mkd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4dc48604-2d11-4714-95bf-3c4102c37863,},Annotations:map[string]string{io.kubernetes.container.hash: 30f4c6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f8b4e235c96b60560b65ec040d751fc9849b1690f0350675632db1084b911b3,PodSandboxId:eb29123bf5b0434fc52937eed9d010c4f32be45fa9b78553e324b133bef274c7,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1689620452333632934,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nrnbd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 18a1bea5-ae48-41a1-ad08-c974f450856f,},Annotations:map[string]string{io.kubernetes.container.hash: 720542bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ac4ab34b6f8547ae47be71931004bee53e3f8a024d5f41b95781a6c20ac1640,PodSandboxId:c39575fb72b2428c6bb97332c691615fbf60a8ebbccbbd97d654a5d3e4a7bcc9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689620409865556094,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd620fe3-9f2f-453b-84b4-764f4f61ca5c,},Annotations:map[string]string{io.kubernetes.container.hash: 734f5e2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78743d1ce2adb97cac01af8bc0dd05b9b1912d19ce27d7de72be760af296784a,PodSandboxId:e926497311c1cbdd9c82b9829f46d6c8e4474ca1b083853f057320bdd7f40c3a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1689620409301068867,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqhr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60604053-13c2-41dd-afc0-e7c6fe02c564,},Annotations:map[string]string{io.kubernetes.container.hash: a2a589eb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca8adfeba943bdeb2328c2192e2e9e21b69a34e28713c2ed3493403d86105205,PodSandboxId:d4ad6f5eecb813c5d56af60277babddd8bd298c370c8eccb0d358d8d407567b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1689620408525587859,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-jtbmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec868a37-f17a-4a66-bf44-7b6f3d009f29,},Annotations:map[string]string{io.kubernetes.container.hash: 81eb2838,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8af39171495917947445c243ee1154b693137f097752ea143aeda966ea8a85ae,Pod
SandboxId:fb34f49c189449709b428086df82dcf9afcb40babba4d51ca0cc39de73d53ffb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1689620408553521700,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-xgpjw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea5c31f8-7623-44bf-b725-2261362173dd,},Annotations:map[string]string{io.kubernetes.container.hash: 81eb2838,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:702e5660b3b165fba93073e44476a48885831ed2cc7facc23c95cca342ef5113,PodSandboxId:097d1750f3cd63914d78e1135c7b423494c5a2c60a58a28e17e29ba20cd1f51f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1689620384814484441,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-946642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4020d4221a809345951a8526bf5735a7,},Annotations:map[string]string{io.kubernetes.container.hash: d4c104a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f78582daaa86aa1b9a47c6333087ee63511f484d092a14b675686e483b4fc60,PodSandboxId:d83bf5c9b4cf4e86a05aec9f7056590cbc81da2df97f84e301e286ef56347bae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1689620383470974942,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-946642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf81fc6cf537aca2df03dfe654e45f8a2e8bccba0290de6abc7b4a92e22708ac,PodSandboxId:a877732ef74def8a8ce3e8dca128802237195ac51a7bfbf0c17383dbb239950f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1689620383352343836,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-946642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:113be5574875379c132a166f334b77452623787af9883412b43730bc79476120,PodSandboxId:a8da5d63bd2d57b928c0ba7c4a504d6848be0e35e93d36f60268c32cadc80dea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1689620383170394841,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-946642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 311d51cbd29d6849e1eae0e8be2b5e4c,},Annotations:map[string]string{io.kubernetes.container.hash: 54bc8ddf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=01857b68-8bcb-4671-aba3-6045304ad144 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:03:51 ingress-addon-legacy-946642 crio[719]: time="2023-07-17 19:03:51.951860204Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=180611d8-ec20-4b49-8ada-c6f99fa87b96 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:03:51 ingress-addon-legacy-946642 crio[719]: time="2023-07-17 19:03:51.952086682Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=180611d8-ec20-4b49-8ada-c6f99fa87b96 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:03:51 ingress-addon-legacy-946642 crio[719]: time="2023-07-17 19:03:51.952528233Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1d2c3d1a6e05f5050053c10fb9be0783e8aaa2d1857b472d4e26102e61126982,PodSandboxId:06530ad76b0cd7076d601410beca1f85f367e3f5f2056c5726479f7f93a95c80,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1689620621679144099,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-tq4hc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 08481f14-f670-4088-ad2f-44dfc71ffa1e,},Annotations:map[string]string{io.kubernetes.container.hash: 1edfe5cd,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02145f263c40ac364530053f693689b4353315e2071f17ffb4892bb81d7be79b,PodSandboxId:d202c8dd3176c8a7722dfd85faf3a2ec0e1ae251bef45eaeac1cb1b02423df4a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1689620481301417542,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c50750ac-5831-4d65-94e3-c90c66a1eaae,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 10617d09,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00214624c3b5721ec913981f76a4f3d64fe73f6f6abc64af638e16b3238a229a,PodSandboxId:82c1f363eb0c5e07a1b0e9c9a9b180edbb3688018c7f90aa4bd8dc86b92780e0,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1689620461765731828,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-6p4x6,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2d8c75d9-4f0c-4089-b914-b962d27b872a,},Annotations:map[string]string{io.kubernetes.container.hash: 92da1f6a,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:352382e8bc7fc45d89a9e6284bd5941d06f78167666204c3e8978c5d6023da69,PodSandboxId:aec47474b4fd87cb391949d1b2c4bfaed1e844ac4129afda6313a51cf0b499f1,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1689620452504147936,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-q7mkd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4dc48604-2d11-4714-95bf-3c4102c37863,},Annotations:map[string]string{io.kubernetes.container.hash: 30f4c6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f8b4e235c96b60560b65ec040d751fc9849b1690f0350675632db1084b911b3,PodSandboxId:eb29123bf5b0434fc52937eed9d010c4f32be45fa9b78553e324b133bef274c7,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1689620452333632934,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nrnbd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 18a1bea5-ae48-41a1-ad08-c974f450856f,},Annotations:map[string]string{io.kubernetes.container.hash: 720542bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ac4ab34b6f8547ae47be71931004bee53e3f8a024d5f41b95781a6c20ac1640,PodSandboxId:c39575fb72b2428c6bb97332c691615fbf60a8ebbccbbd97d654a5d3e4a7bcc9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689620409865556094,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd620fe3-9f2f-453b-84b4-764f4f61ca5c,},Annotations:map[string]string{io.kubernetes.container.hash: 734f5e2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78743d1ce2adb97cac01af8bc0dd05b9b1912d19ce27d7de72be760af296784a,PodSandboxId:e926497311c1cbdd9c82b9829f46d6c8e4474ca1b083853f057320bdd7f40c3a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1689620409301068867,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqhr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60604053-13c2-41dd-afc0-e7c6fe02c564,},Annotations:map[string]string{io.kubernetes.container.hash: a2a589eb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca8adfeba943bdeb2328c2192e2e9e21b69a34e28713c2ed3493403d86105205,PodSandboxId:d4ad6f5eecb813c5d56af60277babddd8bd298c370c8eccb0d358d8d407567b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1689620408525587859,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-jtbmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec868a37-f17a-4a66-bf44-7b6f3d009f29,},Annotations:map[string]string{io.kubernetes.container.hash: 81eb2838,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8af39171495917947445c243ee1154b693137f097752ea143aeda966ea8a85ae,Pod
SandboxId:fb34f49c189449709b428086df82dcf9afcb40babba4d51ca0cc39de73d53ffb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1689620408553521700,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-xgpjw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea5c31f8-7623-44bf-b725-2261362173dd,},Annotations:map[string]string{io.kubernetes.container.hash: 81eb2838,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:702e5660b3b165fba93073e44476a48885831ed2cc7facc23c95cca342ef5113,PodSandboxId:097d1750f3cd63914d78e1135c7b423494c5a2c60a58a28e17e29ba20cd1f51f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1689620384814484441,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-946642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4020d4221a809345951a8526bf5735a7,},Annotations:map[string]string{io.kubernetes.container.hash: d4c104a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f78582daaa86aa1b9a47c6333087ee63511f484d092a14b675686e483b4fc60,PodSandboxId:d83bf5c9b4cf4e86a05aec9f7056590cbc81da2df97f84e301e286ef56347bae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1689620383470974942,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-946642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf81fc6cf537aca2df03dfe654e45f8a2e8bccba0290de6abc7b4a92e22708ac,PodSandboxId:a877732ef74def8a8ce3e8dca128802237195ac51a7bfbf0c17383dbb239950f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1689620383352343836,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-946642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:113be5574875379c132a166f334b77452623787af9883412b43730bc79476120,PodSandboxId:a8da5d63bd2d57b928c0ba7c4a504d6848be0e35e93d36f60268c32cadc80dea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1689620383170394841,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-946642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 311d51cbd29d6849e1eae0e8be2b5e4c,},Annotations:map[string]string{io.kubernetes.container.hash: 54bc8ddf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=180611d8-ec20-4b49-8ada-c6f99fa87b96 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:03:51 ingress-addon-legacy-946642 crio[719]: time="2023-07-17 19:03:51.992402007Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=408b3bb6-b6c9-47f4-8b04-572202bba329 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:03:51 ingress-addon-legacy-946642 crio[719]: time="2023-07-17 19:03:51.992474635Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=408b3bb6-b6c9-47f4-8b04-572202bba329 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:03:51 ingress-addon-legacy-946642 crio[719]: time="2023-07-17 19:03:51.992862126Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1d2c3d1a6e05f5050053c10fb9be0783e8aaa2d1857b472d4e26102e61126982,PodSandboxId:06530ad76b0cd7076d601410beca1f85f367e3f5f2056c5726479f7f93a95c80,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1689620621679144099,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-tq4hc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 08481f14-f670-4088-ad2f-44dfc71ffa1e,},Annotations:map[string]string{io.kubernetes.container.hash: 1edfe5cd,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02145f263c40ac364530053f693689b4353315e2071f17ffb4892bb81d7be79b,PodSandboxId:d202c8dd3176c8a7722dfd85faf3a2ec0e1ae251bef45eaeac1cb1b02423df4a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1689620481301417542,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c50750ac-5831-4d65-94e3-c90c66a1eaae,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 10617d09,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00214624c3b5721ec913981f76a4f3d64fe73f6f6abc64af638e16b3238a229a,PodSandboxId:82c1f363eb0c5e07a1b0e9c9a9b180edbb3688018c7f90aa4bd8dc86b92780e0,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1689620461765731828,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-6p4x6,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2d8c75d9-4f0c-4089-b914-b962d27b872a,},Annotations:map[string]string{io.kubernetes.container.hash: 92da1f6a,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:352382e8bc7fc45d89a9e6284bd5941d06f78167666204c3e8978c5d6023da69,PodSandboxId:aec47474b4fd87cb391949d1b2c4bfaed1e844ac4129afda6313a51cf0b499f1,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1689620452504147936,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-q7mkd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4dc48604-2d11-4714-95bf-3c4102c37863,},Annotations:map[string]string{io.kubernetes.container.hash: 30f4c6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f8b4e235c96b60560b65ec040d751fc9849b1690f0350675632db1084b911b3,PodSandboxId:eb29123bf5b0434fc52937eed9d010c4f32be45fa9b78553e324b133bef274c7,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1689620452333632934,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nrnbd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 18a1bea5-ae48-41a1-ad08-c974f450856f,},Annotations:map[string]string{io.kubernetes.container.hash: 720542bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ac4ab34b6f8547ae47be71931004bee53e3f8a024d5f41b95781a6c20ac1640,PodSandboxId:c39575fb72b2428c6bb97332c691615fbf60a8ebbccbbd97d654a5d3e4a7bcc9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689620409865556094,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd620fe3-9f2f-453b-84b4-764f4f61ca5c,},Annotations:map[string]string{io.kubernetes.container.hash: 734f5e2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78743d1ce2adb97cac01af8bc0dd05b9b1912d19ce27d7de72be760af296784a,PodSandboxId:e926497311c1cbdd9c82b9829f46d6c8e4474ca1b083853f057320bdd7f40c3a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1689620409301068867,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqhr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60604053-13c2-41dd-afc0-e7c6fe02c564,},Annotations:map[string]string{io.kubernetes.container.hash: a2a589eb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca8adfeba943bdeb2328c2192e2e9e21b69a34e28713c2ed3493403d86105205,PodSandboxId:d4ad6f5eecb813c5d56af60277babddd8bd298c370c8eccb0d358d8d407567b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1689620408525587859,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-jtbmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec868a37-f17a-4a66-bf44-7b6f3d009f29,},Annotations:map[string]string{io.kubernetes.container.hash: 81eb2838,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8af39171495917947445c243ee1154b693137f097752ea143aeda966ea8a85ae,Pod
SandboxId:fb34f49c189449709b428086df82dcf9afcb40babba4d51ca0cc39de73d53ffb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1689620408553521700,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-xgpjw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea5c31f8-7623-44bf-b725-2261362173dd,},Annotations:map[string]string{io.kubernetes.container.hash: 81eb2838,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:702e5660b3b165fba93073e44476a48885831ed2cc7facc23c95cca342ef5113,PodSandboxId:097d1750f3cd63914d78e1135c7b423494c5a2c60a58a28e17e29ba20cd1f51f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1689620384814484441,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-946642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4020d4221a809345951a8526bf5735a7,},Annotations:map[string]string{io.kubernetes.container.hash: d4c104a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f78582daaa86aa1b9a47c6333087ee63511f484d092a14b675686e483b4fc60,PodSandboxId:d83bf5c9b4cf4e86a05aec9f7056590cbc81da2df97f84e301e286ef56347bae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1689620383470974942,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-946642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf81fc6cf537aca2df03dfe654e45f8a2e8bccba0290de6abc7b4a92e22708ac,PodSandboxId:a877732ef74def8a8ce3e8dca128802237195ac51a7bfbf0c17383dbb239950f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1689620383352343836,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-946642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:113be5574875379c132a166f334b77452623787af9883412b43730bc79476120,PodSandboxId:a8da5d63bd2d57b928c0ba7c4a504d6848be0e35e93d36f60268c32cadc80dea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1689620383170394841,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-946642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 311d51cbd29d6849e1eae0e8be2b5e4c,},Annotations:map[string]string{io.kubernetes.container.hash: 54bc8ddf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=408b3bb6-b6c9-47f4-8b04-572202bba329 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:03:52 ingress-addon-legacy-946642 crio[719]: time="2023-07-17 19:03:52.033707551Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=afea6265-288e-4472-8d0b-d35635348957 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:03:52 ingress-addon-legacy-946642 crio[719]: time="2023-07-17 19:03:52.033776212Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=afea6265-288e-4472-8d0b-d35635348957 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:03:52 ingress-addon-legacy-946642 crio[719]: time="2023-07-17 19:03:52.034228859Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1d2c3d1a6e05f5050053c10fb9be0783e8aaa2d1857b472d4e26102e61126982,PodSandboxId:06530ad76b0cd7076d601410beca1f85f367e3f5f2056c5726479f7f93a95c80,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1689620621679144099,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-tq4hc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 08481f14-f670-4088-ad2f-44dfc71ffa1e,},Annotations:map[string]string{io.kubernetes.container.hash: 1edfe5cd,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02145f263c40ac364530053f693689b4353315e2071f17ffb4892bb81d7be79b,PodSandboxId:d202c8dd3176c8a7722dfd85faf3a2ec0e1ae251bef45eaeac1cb1b02423df4a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1689620481301417542,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c50750ac-5831-4d65-94e3-c90c66a1eaae,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 10617d09,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00214624c3b5721ec913981f76a4f3d64fe73f6f6abc64af638e16b3238a229a,PodSandboxId:82c1f363eb0c5e07a1b0e9c9a9b180edbb3688018c7f90aa4bd8dc86b92780e0,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1689620461765731828,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-6p4x6,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2d8c75d9-4f0c-4089-b914-b962d27b872a,},Annotations:map[string]string{io.kubernetes.container.hash: 92da1f6a,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:352382e8bc7fc45d89a9e6284bd5941d06f78167666204c3e8978c5d6023da69,PodSandboxId:aec47474b4fd87cb391949d1b2c4bfaed1e844ac4129afda6313a51cf0b499f1,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1689620452504147936,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-q7mkd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4dc48604-2d11-4714-95bf-3c4102c37863,},Annotations:map[string]string{io.kubernetes.container.hash: 30f4c6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f8b4e235c96b60560b65ec040d751fc9849b1690f0350675632db1084b911b3,PodSandboxId:eb29123bf5b0434fc52937eed9d010c4f32be45fa9b78553e324b133bef274c7,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1689620452333632934,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nrnbd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 18a1bea5-ae48-41a1-ad08-c974f450856f,},Annotations:map[string]string{io.kubernetes.container.hash: 720542bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ac4ab34b6f8547ae47be71931004bee53e3f8a024d5f41b95781a6c20ac1640,PodSandboxId:c39575fb72b2428c6bb97332c691615fbf60a8ebbccbbd97d654a5d3e4a7bcc9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689620409865556094,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd620fe3-9f2f-453b-84b4-764f4f61ca5c,},Annotations:map[string]string{io.kubernetes.container.hash: 734f5e2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78743d1ce2adb97cac01af8bc0dd05b9b1912d19ce27d7de72be760af296784a,PodSandboxId:e926497311c1cbdd9c82b9829f46d6c8e4474ca1b083853f057320bdd7f40c3a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1689620409301068867,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqhr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60604053-13c2-41dd-afc0-e7c6fe02c564,},Annotations:map[string]string{io.kubernetes.container.hash: a2a589eb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca8adfeba943bdeb2328c2192e2e9e21b69a34e28713c2ed3493403d86105205,PodSandboxId:d4ad6f5eecb813c5d56af60277babddd8bd298c370c8eccb0d358d8d407567b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1689620408525587859,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-jtbmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec868a37-f17a-4a66-bf44-7b6f3d009f29,},Annotations:map[string]string{io.kubernetes.container.hash: 81eb2838,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8af39171495917947445c243ee1154b693137f097752ea143aeda966ea8a85ae,Pod
SandboxId:fb34f49c189449709b428086df82dcf9afcb40babba4d51ca0cc39de73d53ffb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1689620408553521700,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-xgpjw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea5c31f8-7623-44bf-b725-2261362173dd,},Annotations:map[string]string{io.kubernetes.container.hash: 81eb2838,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:702e5660b3b165fba93073e44476a48885831ed2cc7facc23c95cca342ef5113,PodSandboxId:097d1750f3cd63914d78e1135c7b423494c5a2c60a58a28e17e29ba20cd1f51f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1689620384814484441,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-946642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4020d4221a809345951a8526bf5735a7,},Annotations:map[string]string{io.kubernetes.container.hash: d4c104a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f78582daaa86aa1b9a47c6333087ee63511f484d092a14b675686e483b4fc60,PodSandboxId:d83bf5c9b4cf4e86a05aec9f7056590cbc81da2df97f84e301e286ef56347bae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1689620383470974942,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-946642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf81fc6cf537aca2df03dfe654e45f8a2e8bccba0290de6abc7b4a92e22708ac,PodSandboxId:a877732ef74def8a8ce3e8dca128802237195ac51a7bfbf0c17383dbb239950f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1689620383352343836,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-946642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:113be5574875379c132a166f334b77452623787af9883412b43730bc79476120,PodSandboxId:a8da5d63bd2d57b928c0ba7c4a504d6848be0e35e93d36f60268c32cadc80dea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1689620383170394841,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-946642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 311d51cbd29d6849e1eae0e8be2b5e4c,},Annotations:map[string]string{io.kubernetes.container.hash: 54bc8ddf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=afea6265-288e-4472-8d0b-d35635348957 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:03:52 ingress-addon-legacy-946642 crio[719]: time="2023-07-17 19:03:52.067565717Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=79d36a15-3682-4848-8722-4848c29d2095 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:03:52 ingress-addon-legacy-946642 crio[719]: time="2023-07-17 19:03:52.067633076Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=79d36a15-3682-4848-8722-4848c29d2095 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:03:52 ingress-addon-legacy-946642 crio[719]: time="2023-07-17 19:03:52.068156208Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1d2c3d1a6e05f5050053c10fb9be0783e8aaa2d1857b472d4e26102e61126982,PodSandboxId:06530ad76b0cd7076d601410beca1f85f367e3f5f2056c5726479f7f93a95c80,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1689620621679144099,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-tq4hc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 08481f14-f670-4088-ad2f-44dfc71ffa1e,},Annotations:map[string]string{io.kubernetes.container.hash: 1edfe5cd,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02145f263c40ac364530053f693689b4353315e2071f17ffb4892bb81d7be79b,PodSandboxId:d202c8dd3176c8a7722dfd85faf3a2ec0e1ae251bef45eaeac1cb1b02423df4a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1689620481301417542,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c50750ac-5831-4d65-94e3-c90c66a1eaae,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 10617d09,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00214624c3b5721ec913981f76a4f3d64fe73f6f6abc64af638e16b3238a229a,PodSandboxId:82c1f363eb0c5e07a1b0e9c9a9b180edbb3688018c7f90aa4bd8dc86b92780e0,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1689620461765731828,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-6p4x6,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2d8c75d9-4f0c-4089-b914-b962d27b872a,},Annotations:map[string]string{io.kubernetes.container.hash: 92da1f6a,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:352382e8bc7fc45d89a9e6284bd5941d06f78167666204c3e8978c5d6023da69,PodSandboxId:aec47474b4fd87cb391949d1b2c4bfaed1e844ac4129afda6313a51cf0b499f1,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1689620452504147936,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-q7mkd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4dc48604-2d11-4714-95bf-3c4102c37863,},Annotations:map[string]string{io.kubernetes.container.hash: 30f4c6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f8b4e235c96b60560b65ec040d751fc9849b1690f0350675632db1084b911b3,PodSandboxId:eb29123bf5b0434fc52937eed9d010c4f32be45fa9b78553e324b133bef274c7,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1689620452333632934,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nrnbd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 18a1bea5-ae48-41a1-ad08-c974f450856f,},Annotations:map[string]string{io.kubernetes.container.hash: 720542bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ac4ab34b6f8547ae47be71931004bee53e3f8a024d5f41b95781a6c20ac1640,PodSandboxId:c39575fb72b2428c6bb97332c691615fbf60a8ebbccbbd97d654a5d3e4a7bcc9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689620409865556094,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd620fe3-9f2f-453b-84b4-764f4f61ca5c,},Annotations:map[string]string{io.kubernetes.container.hash: 734f5e2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78743d1ce2adb97cac01af8bc0dd05b9b1912d19ce27d7de72be760af296784a,PodSandboxId:e926497311c1cbdd9c82b9829f46d6c8e4474ca1b083853f057320bdd7f40c3a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1689620409301068867,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sqhr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60604053-13c2-41dd-afc0-e7c6fe02c564,},Annotations:map[string]string{io.kubernetes.container.hash: a2a589eb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca8adfeba943bdeb2328c2192e2e9e21b69a34e28713c2ed3493403d86105205,PodSandboxId:d4ad6f5eecb813c5d56af60277babddd8bd298c370c8eccb0d358d8d407567b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1689620408525587859,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-jtbmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec868a37-f17a-4a66-bf44-7b6f3d009f29,},Annotations:map[string]string{io.kubernetes.container.hash: 81eb2838,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8af39171495917947445c243ee1154b693137f097752ea143aeda966ea8a85ae,Pod
SandboxId:fb34f49c189449709b428086df82dcf9afcb40babba4d51ca0cc39de73d53ffb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1689620408553521700,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-xgpjw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea5c31f8-7623-44bf-b725-2261362173dd,},Annotations:map[string]string{io.kubernetes.container.hash: 81eb2838,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:702e5660b3b165fba93073e44476a48885831ed2cc7facc23c95cca342ef5113,PodSandboxId:097d1750f3cd63914d78e1135c7b423494c5a2c60a58a28e17e29ba20cd1f51f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1689620384814484441,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-946642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4020d4221a809345951a8526bf5735a7,},Annotations:map[string]string{io.kubernetes.container.hash: d4c104a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f78582daaa86aa1b9a47c6333087ee63511f484d092a14b675686e483b4fc60,PodSandboxId:d83bf5c9b4cf4e86a05aec9f7056590cbc81da2df97f84e301e286ef56347bae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1689620383470974942,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-946642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf81fc6cf537aca2df03dfe654e45f8a2e8bccba0290de6abc7b4a92e22708ac,PodSandboxId:a877732ef74def8a8ce3e8dca128802237195ac51a7bfbf0c17383dbb239950f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1689620383352343836,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-946642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:113be5574875379c132a166f334b77452623787af9883412b43730bc79476120,PodSandboxId:a8da5d63bd2d57b928c0ba7c4a504d6848be0e35e93d36f60268c32cadc80dea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1689620383170394841,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-946642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 311d51cbd29d6849e1eae0e8be2b5e4c,},Annotations:map[string]string{io.kubernetes.container.hash: 54bc8ddf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=79d36a15-3682-4848-8722-4848c29d2095 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	1d2c3d1a6e05f       gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea            10 seconds ago      Running             hello-world-app           0                   06530ad76b0cd
	02145f263c40a       docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6                    2 minutes ago       Running             nginx                     0                   d202c8dd3176c
	00214624c3b57       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   82c1f363eb0c5
	352382e8bc7fc       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     2 minutes ago       Exited              patch                     0                   aec47474b4fd8
	8f8b4e235c96b       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     2 minutes ago       Exited              create                    0                   eb29123bf5b04
	4ac4ab34b6f85       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   c39575fb72b24
	78743d1ce2adb       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   e926497311c1c
	8af3917149591       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   fb34f49c18944
	ca8adfeba943b       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   d4ad6f5eecb81
	702e5660b3b16       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   4 minutes ago       Running             etcd                      0                   097d1750f3cd6
	0f78582daaa86       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   4 minutes ago       Running             kube-controller-manager   0                   d83bf5c9b4cf4
	bf81fc6cf537a       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   4 minutes ago       Running             kube-scheduler            0                   a877732ef74de
	113be55748753       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   4 minutes ago       Running             kube-apiserver            0                   a8da5d63bd2d5
	
	* 
	* ==> coredns [8af39171495917947445c243ee1154b693137f097752ea143aeda966ea8a85ae] <==
	* CoreDNS-1.6.7
	linux/amd64, go1.13.6, da7f65b
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = 6dca4351036a5cca7eefa7c93a3dea30
	[INFO] Reloading complete
	[INFO] 10.244.0.6:58824 - 23038 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000577502s
	[INFO] 10.244.0.6:58824 - 15691 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000253435s
	[INFO] 10.244.0.6:58824 - 2466 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000141339s
	[INFO] 10.244.0.6:58824 - 46559 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000172884s
	[INFO] 10.244.0.6:58824 - 28445 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000102912s
	[INFO] 10.244.0.6:58824 - 45178 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000131363s
	[INFO] 10.244.0.6:58824 - 43833 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000174104s
	I0717 19:00:38.948253       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2023-07-17 19:00:08.947359654 +0000 UTC m=+0.088580838) (total time: 30.000806379s):
	Trace[2019727887]: [30.000806379s] [30.000806379s] END
	E0717 19:00:38.948338       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I0717 19:00:38.950230       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2023-07-17 19:00:08.94928265 +0000 UTC m=+0.090503838) (total time: 30.000927821s):
	Trace[1427131847]: [30.000927821s] [30.000927821s] END
	E0717 19:00:38.950273       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I0717 19:00:38.950377       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2023-07-17 19:00:08.949548278 +0000 UTC m=+0.090769466) (total time: 30.000812736s):
	Trace[939984059]: [30.000812736s] [30.000812736s] END
	E0717 19:00:38.950402       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> coredns [ca8adfeba943bdeb2328c2192e2e9e21b69a34e28713c2ed3493403d86105205] <==
	* [INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = 6dca4351036a5cca7eefa7c93a3dea30
	[INFO] Reloading complete
	[INFO] 127.0.0.1:56222 - 4554 "HINFO IN 1430029716607735760.3907007083577116224. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009101332s
	[INFO] 10.244.0.6:36471 - 613 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000545529s
	[INFO] 10.244.0.6:36471 - 55090 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000293301s
	[INFO] 10.244.0.6:36471 - 17542 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000157025s
	[INFO] 10.244.0.6:36471 - 40595 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000139346s
	[INFO] 10.244.0.6:36471 - 36488 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000188703s
	[INFO] 10.244.0.6:36471 - 13942 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000180201s
	[INFO] 10.244.0.6:36471 - 46627 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000122614s
	[INFO] 10.244.0.6:36554 - 65388 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000101384s
	[INFO] 10.244.0.6:54967 - 25597 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000032162s
	[INFO] 10.244.0.6:54967 - 54451 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000047392s
	[INFO] 10.244.0.6:36554 - 47588 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000231394s
	[INFO] 10.244.0.6:54967 - 64911 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000089852s
	[INFO] 10.244.0.6:36554 - 65014 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000066061s
	[INFO] 10.244.0.6:36554 - 7535 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000068012s
	[INFO] 10.244.0.6:54967 - 23366 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00012006s
	[INFO] 10.244.0.6:36554 - 45340 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000063837s
	[INFO] 10.244.0.6:36554 - 12595 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00006819s
	[INFO] 10.244.0.6:54967 - 25852 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000067919s
	[INFO] 10.244.0.6:36554 - 29546 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000157022s
	[INFO] 10.244.0.6:54967 - 20419 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000058134s
	[INFO] 10.244.0.6:54967 - 28083 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000143184s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-946642
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-946642
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5
	                    minikube.k8s.io/name=ingress-addon-legacy-946642
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T18_59_52_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 18:59:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-946642
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 19:03:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 19:01:22 +0000   Mon, 17 Jul 2023 18:59:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 19:01:22 +0000   Mon, 17 Jul 2023 18:59:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 19:01:22 +0000   Mon, 17 Jul 2023 18:59:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 19:01:22 +0000   Mon, 17 Jul 2023 19:00:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.20
	  Hostname:    ingress-addon-legacy-946642
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	System Info:
	  Machine ID:                 ea73ed2fa34849508e9fcfd487556b06
	  System UUID:                ea73ed2f-a348-4950-8e9f-cfd487556b06
	  Boot ID:                    f8b88c73-d268-4b98-9fe0-a10c70a6abc4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-tq4hc                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 coredns-66bff467f8-jtbmr                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m45s
	  kube-system                 coredns-66bff467f8-xgpjw                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m45s
	  kube-system                 etcd-ingress-addon-legacy-946642                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-apiserver-ingress-addon-legacy-946642             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-controller-manager-ingress-addon-legacy-946642    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-proxy-sqhr8                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 kube-scheduler-ingress-addon-legacy-946642             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             140Mi (3%)  340Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  4m11s (x6 over 4m11s)  kubelet     Node ingress-addon-legacy-946642 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m11s (x5 over 4m11s)  kubelet     Node ingress-addon-legacy-946642 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m11s (x5 over 4m11s)  kubelet     Node ingress-addon-legacy-946642 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m                     kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m                     kubelet     Node ingress-addon-legacy-946642 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m                     kubelet     Node ingress-addon-legacy-946642 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m                     kubelet     Node ingress-addon-legacy-946642 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m                     kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m50s                  kubelet     Node ingress-addon-legacy-946642 status is now: NodeReady
	  Normal  Starting                 3m43s                  kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Jul17 18:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.100070] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.475756] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.717049] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.144229] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.095756] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.546928] systemd-fstab-generator[645]: Ignoring "noauto" for root device
	[  +0.120825] systemd-fstab-generator[656]: Ignoring "noauto" for root device
	[  +0.156696] systemd-fstab-generator[669]: Ignoring "noauto" for root device
	[  +0.126066] systemd-fstab-generator[680]: Ignoring "noauto" for root device
	[  +0.246197] systemd-fstab-generator[705]: Ignoring "noauto" for root device
	[  +8.229275] systemd-fstab-generator[1033]: Ignoring "noauto" for root device
	[  +3.165989] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +10.059424] systemd-fstab-generator[1439]: Ignoring "noauto" for root device
	[Jul17 19:00] kauditd_printk_skb: 6 callbacks suppressed
	[ +34.165339] kauditd_printk_skb: 16 callbacks suppressed
	[ +10.982987] kauditd_printk_skb: 6 callbacks suppressed
	[Jul17 19:01] kauditd_printk_skb: 7 callbacks suppressed
	[Jul17 19:03] kauditd_printk_skb: 5 callbacks suppressed
	[  +7.169767] kauditd_printk_skb: 7 callbacks suppressed
	
	* 
	* ==> etcd [702e5660b3b165fba93073e44476a48885831ed2cc7facc23c95cca342ef5113] <==
	* raft2023/07/17 18:59:44 INFO: 9d86f3f40f3d97f5 switched to configuration voters=(11351028140387178485)
	2023-07-17 18:59:44.998253 W | auth: simple token is not cryptographically signed
	2023-07-17 18:59:45.004467 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	raft2023/07/17 18:59:45 INFO: 9d86f3f40f3d97f5 switched to configuration voters=(11351028140387178485)
	2023-07-17 18:59:45.008083 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-07-17 18:59:45.008246 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-07-17 18:59:45.008284 I | etcdserver: 9d86f3f40f3d97f5 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-07-17 18:59:45.008383 I | embed: listening for peers on 192.168.39.20:2380
	2023-07-17 18:59:45.008459 I | etcdserver/membership: added member 9d86f3f40f3d97f5 [https://192.168.39.20:2380] to cluster e50fb330f7c278b
	raft2023/07/17 18:59:45 INFO: 9d86f3f40f3d97f5 is starting a new election at term 1
	raft2023/07/17 18:59:45 INFO: 9d86f3f40f3d97f5 became candidate at term 2
	raft2023/07/17 18:59:45 INFO: 9d86f3f40f3d97f5 received MsgVoteResp from 9d86f3f40f3d97f5 at term 2
	raft2023/07/17 18:59:45 INFO: 9d86f3f40f3d97f5 became leader at term 2
	raft2023/07/17 18:59:45 INFO: raft.node: 9d86f3f40f3d97f5 elected leader 9d86f3f40f3d97f5 at term 2
	2023-07-17 18:59:45.487312 I | etcdserver: setting up the initial cluster version to 3.4
	2023-07-17 18:59:45.487477 I | etcdserver: published {Name:ingress-addon-legacy-946642 ClientURLs:[https://192.168.39.20:2379]} to cluster e50fb330f7c278b
	2023-07-17 18:59:45.487807 I | embed: ready to serve client requests
	2023-07-17 18:59:45.488177 I | embed: ready to serve client requests
	2023-07-17 18:59:45.489738 I | embed: serving client requests on 127.0.0.1:2379
	2023-07-17 18:59:45.492374 I | embed: serving client requests on 192.168.39.20:2379
	2023-07-17 18:59:45.503008 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-07-17 18:59:45.503124 I | etcdserver/api: enabled capabilities for version 3.4
	2023-07-17 19:00:06.868385 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/expand-controller\" " with result "range_response_count:0 size:5" took too long (347.135428ms) to execute
	2023-07-17 19:00:08.754299 W | etcdserver: read-only range request "key:\"/registry/minions/ingress-addon-legacy-946642\" " with result "range_response_count:1 size:6295" took too long (141.021472ms) to execute
	2023-07-17 19:01:54.896754 W | etcdserver: read-only range request "key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" count_only:true " with result "range_response_count:0 size:5" took too long (103.095623ms) to execute
	
	* 
	* ==> kernel <==
	*  19:03:52 up 4 min,  0 users,  load average: 0.30, 0.52, 0.26
	Linux ingress-addon-legacy-946642 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [113be5574875379c132a166f334b77452623787af9883412b43730bc79476120] <==
	* I0717 18:59:48.637162       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	E0717 18:59:48.649305       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.39.20, ResourceVersion: 0, AdditionalErrorMsg: 
	I0717 18:59:48.693255       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0717 18:59:48.694469       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 18:59:48.776312       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 18:59:48.776680       1 cache.go:39] Caches are synced for autoregister controller
	I0717 18:59:48.780007       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0717 18:59:49.573077       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0717 18:59:49.573215       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0717 18:59:49.589179       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0717 18:59:49.596508       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0717 18:59:49.596579       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0717 18:59:50.202537       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 18:59:50.278040       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0717 18:59:50.364591       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.39.20]
	I0717 18:59:50.365694       1 controller.go:609] quota admission added evaluator for: endpoints
	I0717 18:59:50.375561       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 18:59:50.942311       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0717 18:59:52.148563       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0717 18:59:52.241125       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0717 18:59:52.627817       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 19:00:07.312601       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0717 19:00:07.369337       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0717 19:00:50.029556       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0717 19:01:18.109868       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [0f78582daaa86aa1b9a47c6333087ee63511f484d092a14b675686e483b4fc60] <==
	* I0717 19:00:07.383891       1 shared_informer.go:230] Caches are synced for endpoint 
	I0717 19:00:07.386223       1 shared_informer.go:230] Caches are synced for disruption 
	I0717 19:00:07.386321       1 disruption.go:339] Sending events to api server.
	I0717 19:00:07.388523       1 shared_informer.go:230] Caches are synced for HPA 
	I0717 19:00:07.388693       1 shared_informer.go:230] Caches are synced for ReplicaSet 
	I0717 19:00:07.389036       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I0717 19:00:07.393292       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"7b169da8-9b5f-4d24-a759-2d1a7b81c620", APIVersion:"apps/v1", ResourceVersion:"224", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-sqhr8
	I0717 19:00:07.444551       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"ca1ea256-0527-499c-a731-df1015941d58", APIVersion:"apps/v1", ResourceVersion:"327", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-xgpjw
	E0717 19:00:07.498056       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"7b169da8-9b5f-4d24-a759-2d1a7b81c620", ResourceVersion:"224", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63825217192, loc:(*time.Location)(0x6d002e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0015bbd00), FieldsType:"FieldsV1", FieldsV1:(*v1.Fields
V1)(0xc0015bbd20)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0015bbd40), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(n
il), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc000e4ef00), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSou
rce)(0xc0015bbd60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.Pr
ojectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0015bbe00), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolum
eSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.20", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0015bbe40)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList
(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0005691d0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000b8dae8), Acti
veDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000127030), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPoli
cy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0000b3fd0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000b8db38)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0717 19:00:07.503204       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"ca1ea256-0527-499c-a731-df1015941d58", APIVersion:"apps/v1", ResourceVersion:"327", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-jtbmr
	I0717 19:00:07.618079       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0717 19:00:07.670121       1 shared_informer.go:230] Caches are synced for resource quota 
	I0717 19:00:07.695480       1 shared_informer.go:230] Caches are synced for resource quota 
	I0717 19:00:07.695730       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0717 19:00:07.695764       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0717 19:00:50.009790       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"6f45d6fd-b370-49de-a518-5fc9b98a8891", APIVersion:"apps/v1", ResourceVersion:"468", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0717 19:00:50.024562       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"f807ba73-3f27-464f-af8a-2e2d4902a2f2", APIVersion:"apps/v1", ResourceVersion:"469", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-6p4x6
	I0717 19:00:50.099315       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"5bdac58f-e003-4f32-abb6-0ae53785dd4b", APIVersion:"batch/v1", ResourceVersion:"475", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-nrnbd
	I0717 19:00:50.213797       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"d95153bf-8d6e-480f-b5fa-750430a5f1bb", APIVersion:"batch/v1", ResourceVersion:"488", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-q7mkd
	I0717 19:00:53.006873       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"d95153bf-8d6e-480f-b5fa-750430a5f1bb", APIVersion:"batch/v1", ResourceVersion:"494", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0717 19:00:53.066259       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"5bdac58f-e003-4f32-abb6-0ae53785dd4b", APIVersion:"batch/v1", ResourceVersion:"485", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0717 19:03:39.235201       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"aa850718-b54d-4bc7-810c-b21b2926ba17", APIVersion:"apps/v1", ResourceVersion:"691", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0717 19:03:39.240655       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"8001be72-add8-4e1c-9470-318af0ece0a1", APIVersion:"apps/v1", ResourceVersion:"692", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-tq4hc
	E0717 19:03:49.305783       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-h474r" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [78743d1ce2adb97cac01af8bc0dd05b9b1912d19ce27d7de72be760af296784a] <==
	* W0717 19:00:09.538765       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0717 19:00:09.549791       1 node.go:136] Successfully retrieved node IP: 192.168.39.20
	I0717 19:00:09.549888       1 server_others.go:186] Using iptables Proxier.
	I0717 19:00:09.550880       1 server.go:583] Version: v1.18.20
	I0717 19:00:09.559198       1 config.go:315] Starting service config controller
	I0717 19:00:09.559265       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0717 19:00:09.559311       1 config.go:133] Starting endpoints config controller
	I0717 19:00:09.559331       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0717 19:00:09.661283       1 shared_informer.go:230] Caches are synced for service config 
	I0717 19:00:09.661406       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [bf81fc6cf537aca2df03dfe654e45f8a2e8bccba0290de6abc7b4a92e22708ac] <==
	* E0717 18:59:48.713709       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 18:59:48.714353       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0717 18:59:48.715885       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 18:59:48.716008       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 18:59:48.716044       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0717 18:59:48.718545       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 18:59:48.718750       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 18:59:48.718838       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 18:59:48.720288       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 18:59:48.720400       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 18:59:48.720503       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 18:59:48.721468       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 18:59:48.721579       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 18:59:48.722354       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 18:59:49.544238       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 18:59:49.550575       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 18:59:49.562881       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 18:59:49.668556       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 18:59:49.707758       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 18:59:49.714673       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 18:59:49.857687       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 18:59:49.862579       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 18:59:49.935763       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 18:59:50.002463       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0717 18:59:52.316282       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-07-17 18:59:16 UTC, ends at Mon 2023-07-17 19:03:52 UTC. --
	Jul 17 19:01:03 ingress-addon-legacy-946642 kubelet[1445]: E0717 19:01:03.508178    1445 reflector.go:178] object-"kube-system"/"minikube-ingress-dns-token-s59bt": Failed to list *v1.Secret: secrets "minikube-ingress-dns-token-s59bt" is forbidden: User "system:node:ingress-addon-legacy-946642" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "ingress-addon-legacy-946642" and this object
	Jul 17 19:01:03 ingress-addon-legacy-946642 kubelet[1445]: I0717 19:01:03.697022    1445 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "minikube-ingress-dns-token-s59bt" (UniqueName: "kubernetes.io/secret/307442c4-01e5-42ab-a759-18b99a81e7ce-minikube-ingress-dns-token-s59bt") pod "kube-ingress-dns-minikube" (UID: "307442c4-01e5-42ab-a759-18b99a81e7ce")
	Jul 17 19:01:18 ingress-addon-legacy-946642 kubelet[1445]: I0717 19:01:18.301603    1445 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jul 17 19:01:18 ingress-addon-legacy-946642 kubelet[1445]: I0717 19:01:18.453413    1445 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-z4tvx" (UniqueName: "kubernetes.io/secret/c50750ac-5831-4d65-94e3-c90c66a1eaae-default-token-z4tvx") pod "nginx" (UID: "c50750ac-5831-4d65-94e3-c90c66a1eaae")
	Jul 17 19:03:39 ingress-addon-legacy-946642 kubelet[1445]: I0717 19:03:39.254437    1445 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jul 17 19:03:39 ingress-addon-legacy-946642 kubelet[1445]: I0717 19:03:39.426778    1445 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-z4tvx" (UniqueName: "kubernetes.io/secret/08481f14-f670-4088-ad2f-44dfc71ffa1e-default-token-z4tvx") pod "hello-world-app-5f5d8b66bb-tq4hc" (UID: "08481f14-f670-4088-ad2f-44dfc71ffa1e")
	Jul 17 19:03:41 ingress-addon-legacy-946642 kubelet[1445]: I0717 19:03:41.554406    1445 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: bf4be1d7c2b4276f2bfef70d7b19ee24d0a40e0400164a5032bfa2e900be3f4e
	Jul 17 19:03:41 ingress-addon-legacy-946642 kubelet[1445]: I0717 19:03:41.612373    1445 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: bf4be1d7c2b4276f2bfef70d7b19ee24d0a40e0400164a5032bfa2e900be3f4e
	Jul 17 19:03:41 ingress-addon-legacy-946642 kubelet[1445]: E0717 19:03:41.613385    1445 remote_runtime.go:295] ContainerStatus "bf4be1d7c2b4276f2bfef70d7b19ee24d0a40e0400164a5032bfa2e900be3f4e" from runtime service failed: rpc error: code = NotFound desc = could not find container "bf4be1d7c2b4276f2bfef70d7b19ee24d0a40e0400164a5032bfa2e900be3f4e": container with ID starting with bf4be1d7c2b4276f2bfef70d7b19ee24d0a40e0400164a5032bfa2e900be3f4e not found: ID does not exist
	Jul 17 19:03:42 ingress-addon-legacy-946642 kubelet[1445]: I0717 19:03:42.740180    1445 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-s59bt" (UniqueName: "kubernetes.io/secret/307442c4-01e5-42ab-a759-18b99a81e7ce-minikube-ingress-dns-token-s59bt") pod "307442c4-01e5-42ab-a759-18b99a81e7ce" (UID: "307442c4-01e5-42ab-a759-18b99a81e7ce")
	Jul 17 19:03:42 ingress-addon-legacy-946642 kubelet[1445]: I0717 19:03:42.753161    1445 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/307442c4-01e5-42ab-a759-18b99a81e7ce-minikube-ingress-dns-token-s59bt" (OuterVolumeSpecName: "minikube-ingress-dns-token-s59bt") pod "307442c4-01e5-42ab-a759-18b99a81e7ce" (UID: "307442c4-01e5-42ab-a759-18b99a81e7ce"). InnerVolumeSpecName "minikube-ingress-dns-token-s59bt". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 19:03:42 ingress-addon-legacy-946642 kubelet[1445]: I0717 19:03:42.840592    1445 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-s59bt" (UniqueName: "kubernetes.io/secret/307442c4-01e5-42ab-a759-18b99a81e7ce-minikube-ingress-dns-token-s59bt") on node "ingress-addon-legacy-946642" DevicePath ""
	Jul 17 19:03:44 ingress-addon-legacy-946642 kubelet[1445]: E0717 19:03:44.546709    1445 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-6p4x6.1772bd018af7a3b9", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-6p4x6", UID:"2d8c75d9-4f0c-4089-b914-b962d27b872a", APIVersion:"v1", ResourceVersion:"477", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-946642"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1258204204603b9, ext:232509368412, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1258204204603b9, ext:232509368412, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-6p4x6.1772bd018af7a3b9" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jul 17 19:03:44 ingress-addon-legacy-946642 kubelet[1445]: E0717 19:03:44.565673    1445 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-6p4x6.1772bd018af7a3b9", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-6p4x6", UID:"2d8c75d9-4f0c-4089-b914-b962d27b872a", APIVersion:"v1", ResourceVersion:"477", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-946642"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1258204204603b9, ext:232509368412, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc12582042150ef62, ext:232526861314, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-6p4x6.1772bd018af7a3b9" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jul 17 19:03:47 ingress-addon-legacy-946642 kubelet[1445]: W0717 19:03:47.583519    1445 pod_container_deletor.go:77] Container "82c1f363eb0c5e07a1b0e9c9a9b180edbb3688018c7f90aa4bd8dc86b92780e0" not found in pod's containers
	Jul 17 19:03:48 ingress-addon-legacy-946642 kubelet[1445]: I0717 19:03:48.763509    1445 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/2d8c75d9-4f0c-4089-b914-b962d27b872a-webhook-cert") pod "2d8c75d9-4f0c-4089-b914-b962d27b872a" (UID: "2d8c75d9-4f0c-4089-b914-b962d27b872a")
	Jul 17 19:03:48 ingress-addon-legacy-946642 kubelet[1445]: I0717 19:03:48.763621    1445 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-2wwb8" (UniqueName: "kubernetes.io/secret/2d8c75d9-4f0c-4089-b914-b962d27b872a-ingress-nginx-token-2wwb8") pod "2d8c75d9-4f0c-4089-b914-b962d27b872a" (UID: "2d8c75d9-4f0c-4089-b914-b962d27b872a")
	Jul 17 19:03:48 ingress-addon-legacy-946642 kubelet[1445]: I0717 19:03:48.778518    1445 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d8c75d9-4f0c-4089-b914-b962d27b872a-ingress-nginx-token-2wwb8" (OuterVolumeSpecName: "ingress-nginx-token-2wwb8") pod "2d8c75d9-4f0c-4089-b914-b962d27b872a" (UID: "2d8c75d9-4f0c-4089-b914-b962d27b872a"). InnerVolumeSpecName "ingress-nginx-token-2wwb8". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 19:03:48 ingress-addon-legacy-946642 kubelet[1445]: I0717 19:03:48.779723    1445 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d8c75d9-4f0c-4089-b914-b962d27b872a-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "2d8c75d9-4f0c-4089-b914-b962d27b872a" (UID: "2d8c75d9-4f0c-4089-b914-b962d27b872a"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 19:03:48 ingress-addon-legacy-946642 kubelet[1445]: I0717 19:03:48.864125    1445 reconciler.go:319] Volume detached for volume "ingress-nginx-token-2wwb8" (UniqueName: "kubernetes.io/secret/2d8c75d9-4f0c-4089-b914-b962d27b872a-ingress-nginx-token-2wwb8") on node "ingress-addon-legacy-946642" DevicePath ""
	Jul 17 19:03:48 ingress-addon-legacy-946642 kubelet[1445]: I0717 19:03:48.864205    1445 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/2d8c75d9-4f0c-4089-b914-b962d27b872a-webhook-cert") on node "ingress-addon-legacy-946642" DevicePath ""
	Jul 17 19:03:50 ingress-addon-legacy-946642 kubelet[1445]: W0717 19:03:50.780883    1445 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/2d8c75d9-4f0c-4089-b914-b962d27b872a/volumes" does not exist
	Jul 17 19:03:52 ingress-addon-legacy-946642 kubelet[1445]: I0717 19:03:52.530712    1445 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 00214624c3b5721ec913981f76a4f3d64fe73f6f6abc64af638e16b3238a229a
	Jul 17 19:03:52 ingress-addon-legacy-946642 kubelet[1445]: I0717 19:03:52.566421    1445 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 352382e8bc7fc45d89a9e6284bd5941d06f78167666204c3e8978c5d6023da69
	Jul 17 19:03:52 ingress-addon-legacy-946642 kubelet[1445]: I0717 19:03:52.607312    1445 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 8f8b4e235c96b60560b65ec040d751fc9849b1690f0350675632db1084b911b3
	
	* 
	* ==> storage-provisioner [4ac4ab34b6f8547ae47be71931004bee53e3f8a024d5f41b95781a6c20ac1640] <==
	* I0717 19:00:10.021842       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 19:00:10.033101       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 19:00:10.033237       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 19:00:10.042852       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 19:00:10.043235       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-946642_ee05964a-520e-4cb4-aee1-82ff23b1feaa!
	I0717 19:00:10.045285       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0df6b549-c64c-4981-906b-ab02267f981d", APIVersion:"v1", ResourceVersion:"390", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-946642_ee05964a-520e-4cb4-aee1-82ff23b1feaa became leader
	I0717 19:00:10.144050       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-946642_ee05964a-520e-4cb4-aee1-82ff23b1feaa!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-946642 -n ingress-addon-legacy-946642
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-946642 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (169.81s)
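Note on the kube-controller-manager entry above that ends with "Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again": this is Kubernetes' standard optimistic-concurrency conflict, returned when an update is submitted against a stale ResourceVersion. Controllers normally recover by re-reading the object and retrying the write. The following is a minimal sketch of that pattern using client-go's retry helper; the package name, function name, and the field being mutated are illustrative assumptions, not code from minikube or this test.

package kubeutil

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// updateKubeProxyRevisionLimit illustrates the usual answer to "the object has
// been modified": re-fetch the object inside the retry loop so every attempt
// is applied against the latest ResourceVersion.
func updateKubeProxyRevisionLimit(ctx context.Context, cs kubernetes.Interface, limit int32) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		ds, err := cs.AppsV1().DaemonSets("kube-system").Get(ctx, "kube-proxy", metav1.GetOptions{})
		if err != nil {
			return err
		}
		ds.Spec.RevisionHistoryLimit = &limit // illustrative mutation; any field update follows the same pattern
		_, err = cs.AppsV1().DaemonSets("kube-system").Update(ctx, ds, metav1.UpdateOptions{})
		return err
	})
}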

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (3.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-464644 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-464644 -- exec busybox-67b7f59bb-bjpl2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-464644 -- exec busybox-67b7f59bb-bjpl2 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-464644 -- exec busybox-67b7f59bb-bjpl2 -- sh -c "ping -c 1 192.168.39.1": exit status 1 (190.187751ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.39.1) from pod (busybox-67b7f59bb-bjpl2): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-464644 -- exec busybox-67b7f59bb-jgj4t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-464644 -- exec busybox-67b7f59bb-jgj4t -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-464644 -- exec busybox-67b7f59bb-jgj4t -- sh -c "ping -c 1 192.168.39.1": exit status 1 (190.300894ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.39.1) from pod (busybox-67b7f59bb-jgj4t): exit status 1
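Note on the ping failures above: BusyBox ping opens a raw ICMP socket, so inside an unprivileged container it reports "permission denied (are you root?)" unless the container is granted CAP_NET_RAW or the node allows unprivileged ICMP datagram sockets (sysctl net.ipv4.ping_group_range). Below is a minimal sketch of a pod spec that adds that capability; the pod name, image tag, and command are illustrative assumptions, not the manifest used by this test.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// busyboxPingPod returns a pod whose container may open raw ICMP sockets,
// which is what BusyBox ping needs when the process is not running as root.
func busyboxPingPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-ping", Namespace: "default"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.36", // illustrative tag
				Command: []string{"sleep", "3600"},
				SecurityContext: &corev1.SecurityContext{
					Capabilities: &corev1.Capabilities{
						Add: []corev1.Capability{"NET_RAW"},
					},
				},
			}},
		},
	}
}

func main() {
	// Print the spec so the sketch can be inspected or piped to kubectl apply -f -.
	out, _ := json.MarshalIndent(busyboxPingPod(), "", "  ")
	fmt.Println(string(out))
}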
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-464644 -n multinode-464644
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-464644 logs -n 25: (1.384547866s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-795139 ssh -- ls                    | mount-start-2-795139 | jenkins | v1.30.1 | 17 Jul 23 19:08 UTC | 17 Jul 23 19:08 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-795139 ssh --                       | mount-start-2-795139 | jenkins | v1.30.1 | 17 Jul 23 19:08 UTC | 17 Jul 23 19:08 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-795139                           | mount-start-2-795139 | jenkins | v1.30.1 | 17 Jul 23 19:08 UTC | 17 Jul 23 19:08 UTC |
	| start   | -p mount-start-2-795139                           | mount-start-2-795139 | jenkins | v1.30.1 | 17 Jul 23 19:08 UTC | 17 Jul 23 19:09 UTC |
	| mount   | /home/jenkins:/minikube-host                      | mount-start-2-795139 | jenkins | v1.30.1 | 17 Jul 23 19:09 UTC |                     |
	|         | --profile mount-start-2-795139                    |                      |         |         |                     |                     |
	|         | --v 0 --9p-version 9p2000.L                       |                      |         |         |                     |                     |
	|         | --gid 0 --ip  --msize 6543                        |                      |         |         |                     |                     |
	|         | --port 46465 --type 9p --uid 0                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-795139 ssh -- ls                    | mount-start-2-795139 | jenkins | v1.30.1 | 17 Jul 23 19:09 UTC | 17 Jul 23 19:09 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-795139 ssh --                       | mount-start-2-795139 | jenkins | v1.30.1 | 17 Jul 23 19:09 UTC | 17 Jul 23 19:09 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-795139                           | mount-start-2-795139 | jenkins | v1.30.1 | 17 Jul 23 19:09 UTC | 17 Jul 23 19:09 UTC |
	| delete  | -p mount-start-1-773422                           | mount-start-1-773422 | jenkins | v1.30.1 | 17 Jul 23 19:09 UTC | 17 Jul 23 19:09 UTC |
	| start   | -p multinode-464644                               | multinode-464644     | jenkins | v1.30.1 | 17 Jul 23 19:09 UTC | 17 Jul 23 19:11 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=kvm2                                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-464644 -- apply -f                   | multinode-464644     | jenkins | v1.30.1 | 17 Jul 23 19:11 UTC | 17 Jul 23 19:11 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-464644 -- rollout                    | multinode-464644     | jenkins | v1.30.1 | 17 Jul 23 19:11 UTC | 17 Jul 23 19:11 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-464644 -- get pods -o                | multinode-464644     | jenkins | v1.30.1 | 17 Jul 23 19:11 UTC | 17 Jul 23 19:11 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-464644 -- get pods -o                | multinode-464644     | jenkins | v1.30.1 | 17 Jul 23 19:11 UTC | 17 Jul 23 19:11 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-464644 -- exec                       | multinode-464644     | jenkins | v1.30.1 | 17 Jul 23 19:11 UTC | 17 Jul 23 19:11 UTC |
	|         | busybox-67b7f59bb-bjpl2 --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-464644 -- exec                       | multinode-464644     | jenkins | v1.30.1 | 17 Jul 23 19:11 UTC | 17 Jul 23 19:11 UTC |
	|         | busybox-67b7f59bb-jgj4t --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-464644 -- exec                       | multinode-464644     | jenkins | v1.30.1 | 17 Jul 23 19:11 UTC | 17 Jul 23 19:11 UTC |
	|         | busybox-67b7f59bb-bjpl2 --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-464644 -- exec                       | multinode-464644     | jenkins | v1.30.1 | 17 Jul 23 19:11 UTC | 17 Jul 23 19:11 UTC |
	|         | busybox-67b7f59bb-jgj4t --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-464644 -- exec                       | multinode-464644     | jenkins | v1.30.1 | 17 Jul 23 19:11 UTC | 17 Jul 23 19:11 UTC |
	|         | busybox-67b7f59bb-bjpl2 -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-464644 -- exec                       | multinode-464644     | jenkins | v1.30.1 | 17 Jul 23 19:11 UTC | 17 Jul 23 19:11 UTC |
	|         | busybox-67b7f59bb-jgj4t -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-464644 -- get pods -o                | multinode-464644     | jenkins | v1.30.1 | 17 Jul 23 19:11 UTC | 17 Jul 23 19:11 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-464644 -- exec                       | multinode-464644     | jenkins | v1.30.1 | 17 Jul 23 19:11 UTC | 17 Jul 23 19:11 UTC |
	|         | busybox-67b7f59bb-bjpl2                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-464644 -- exec                       | multinode-464644     | jenkins | v1.30.1 | 17 Jul 23 19:11 UTC |                     |
	|         | busybox-67b7f59bb-bjpl2 -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-464644 -- exec                       | multinode-464644     | jenkins | v1.30.1 | 17 Jul 23 19:11 UTC | 17 Jul 23 19:11 UTC |
	|         | busybox-67b7f59bb-jgj4t                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-464644 -- exec                       | multinode-464644     | jenkins | v1.30.1 | 17 Jul 23 19:11 UTC |                     |
	|         | busybox-67b7f59bb-jgj4t -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 19:09:04
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 19:09:04.212862 1081367 out.go:296] Setting OutFile to fd 1 ...
	I0717 19:09:04.213023 1081367 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:09:04.213033 1081367 out.go:309] Setting ErrFile to fd 2...
	I0717 19:09:04.213040 1081367 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:09:04.213260 1081367 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1061725/.minikube/bin
	I0717 19:09:04.213935 1081367 out.go:303] Setting JSON to false
	I0717 19:09:04.214939 1081367 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":13895,"bootTime":1689607049,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 19:09:04.215011 1081367 start.go:138] virtualization: kvm guest
	I0717 19:09:04.218225 1081367 out.go:177] * [multinode-464644] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 19:09:04.220705 1081367 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 19:09:04.220636 1081367 notify.go:220] Checking for updates...
	I0717 19:09:04.222725 1081367 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:09:04.224635 1081367 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 19:09:04.226572 1081367 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1061725/.minikube
	I0717 19:09:04.228728 1081367 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 19:09:04.230549 1081367 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 19:09:04.232973 1081367 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 19:09:04.271853 1081367 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 19:09:04.273804 1081367 start.go:298] selected driver: kvm2
	I0717 19:09:04.273825 1081367 start.go:880] validating driver "kvm2" against <nil>
	I0717 19:09:04.273839 1081367 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 19:09:04.274565 1081367 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:09:04.274682 1081367 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16890-1061725/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 19:09:04.290916 1081367 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.30.1
	I0717 19:09:04.290997 1081367 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 19:09:04.291249 1081367 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 19:09:04.291284 1081367 cni.go:84] Creating CNI manager for ""
	I0717 19:09:04.291291 1081367 cni.go:137] 0 nodes found, recommending kindnet
	I0717 19:09:04.291307 1081367 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 19:09:04.291322 1081367 start_flags.go:319] config:
	{Name:multinode-464644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-464644 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugi
n:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:09:04.291529 1081367 iso.go:125] acquiring lock: {Name:mk4c08fdde891ec9d41b144ca36ec57b2da32175 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:09:04.294245 1081367 out.go:177] * Starting control plane node multinode-464644 in cluster multinode-464644
	I0717 19:09:04.296129 1081367 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 19:09:04.296196 1081367 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0717 19:09:04.296213 1081367 cache.go:57] Caching tarball of preloaded images
	I0717 19:09:04.296323 1081367 preload.go:174] Found /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 19:09:04.296336 1081367 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 19:09:04.296739 1081367 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/config.json ...
	I0717 19:09:04.296772 1081367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/config.json: {Name:mkb5c2661350a95e2be3d3aebd8506fca45c3c93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:09:04.296954 1081367 start.go:365] acquiring machines lock for multinode-464644: {Name:mk1fbc5d19c9a63796b02883911e39f1c406ff89 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:09:04.296990 1081367 start.go:369] acquired machines lock for "multinode-464644" in 18.73µs
	I0717 19:09:04.297020 1081367 start.go:93] Provisioning new machine with config: &{Name:multinode-464644 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-4
64644 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:09:04.297136 1081367 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 19:09:04.299644 1081367 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 19:09:04.299844 1081367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:09:04.299906 1081367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:09:04.315181 1081367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36611
	I0717 19:09:04.315709 1081367 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:09:04.316333 1081367 main.go:141] libmachine: Using API Version  1
	I0717 19:09:04.316364 1081367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:09:04.316724 1081367 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:09:04.316957 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetMachineName
	I0717 19:09:04.317129 1081367 main.go:141] libmachine: (multinode-464644) Calling .DriverName
	I0717 19:09:04.317303 1081367 start.go:159] libmachine.API.Create for "multinode-464644" (driver="kvm2")
	I0717 19:09:04.317340 1081367 client.go:168] LocalClient.Create starting
	I0717 19:09:04.317381 1081367 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem
	I0717 19:09:04.317470 1081367 main.go:141] libmachine: Decoding PEM data...
	I0717 19:09:04.317502 1081367 main.go:141] libmachine: Parsing certificate...
	I0717 19:09:04.317617 1081367 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem
	I0717 19:09:04.317650 1081367 main.go:141] libmachine: Decoding PEM data...
	I0717 19:09:04.317676 1081367 main.go:141] libmachine: Parsing certificate...
	I0717 19:09:04.317708 1081367 main.go:141] libmachine: Running pre-create checks...
	I0717 19:09:04.317725 1081367 main.go:141] libmachine: (multinode-464644) Calling .PreCreateCheck
	I0717 19:09:04.318131 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetConfigRaw
	I0717 19:09:04.318567 1081367 main.go:141] libmachine: Creating machine...
	I0717 19:09:04.318584 1081367 main.go:141] libmachine: (multinode-464644) Calling .Create
	I0717 19:09:04.318751 1081367 main.go:141] libmachine: (multinode-464644) Creating KVM machine...
	I0717 19:09:04.320214 1081367 main.go:141] libmachine: (multinode-464644) DBG | found existing default KVM network
	I0717 19:09:04.321220 1081367 main.go:141] libmachine: (multinode-464644) DBG | I0717 19:09:04.321026 1081390 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000298a0}
	I0717 19:09:04.327479 1081367 main.go:141] libmachine: (multinode-464644) DBG | trying to create private KVM network mk-multinode-464644 192.168.39.0/24...
	I0717 19:09:04.408488 1081367 main.go:141] libmachine: (multinode-464644) DBG | private KVM network mk-multinode-464644 192.168.39.0/24 created
	I0717 19:09:04.408549 1081367 main.go:141] libmachine: (multinode-464644) DBG | I0717 19:09:04.408433 1081390 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/16890-1061725/.minikube
	I0717 19:09:04.408568 1081367 main.go:141] libmachine: (multinode-464644) Setting up store path in /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644 ...
	I0717 19:09:04.408591 1081367 main.go:141] libmachine: (multinode-464644) Building disk image from file:///home/jenkins/minikube-integration/16890-1061725/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso
	I0717 19:09:04.408612 1081367 main.go:141] libmachine: (multinode-464644) Downloading /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/16890-1061725/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso...
	I0717 19:09:04.645974 1081367 main.go:141] libmachine: (multinode-464644) DBG | I0717 19:09:04.645732 1081390 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644/id_rsa...
	I0717 19:09:04.803080 1081367 main.go:141] libmachine: (multinode-464644) DBG | I0717 19:09:04.802944 1081390 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644/multinode-464644.rawdisk...
	I0717 19:09:04.803127 1081367 main.go:141] libmachine: (multinode-464644) DBG | Writing magic tar header
	I0717 19:09:04.803147 1081367 main.go:141] libmachine: (multinode-464644) DBG | Writing SSH key tar header
	I0717 19:09:04.803702 1081367 main.go:141] libmachine: (multinode-464644) DBG | I0717 19:09:04.803591 1081390 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644 ...
	I0717 19:09:04.803744 1081367 main.go:141] libmachine: (multinode-464644) Setting executable bit set on /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644 (perms=drwx------)
	I0717 19:09:04.803753 1081367 main.go:141] libmachine: (multinode-464644) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644
	I0717 19:09:04.803764 1081367 main.go:141] libmachine: (multinode-464644) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines
	I0717 19:09:04.803772 1081367 main.go:141] libmachine: (multinode-464644) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16890-1061725/.minikube
	I0717 19:09:04.803784 1081367 main.go:141] libmachine: (multinode-464644) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16890-1061725
	I0717 19:09:04.803794 1081367 main.go:141] libmachine: (multinode-464644) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 19:09:04.803822 1081367 main.go:141] libmachine: (multinode-464644) DBG | Checking permissions on dir: /home/jenkins
	I0717 19:09:04.803833 1081367 main.go:141] libmachine: (multinode-464644) DBG | Checking permissions on dir: /home
	I0717 19:09:04.803841 1081367 main.go:141] libmachine: (multinode-464644) Setting executable bit set on /home/jenkins/minikube-integration/16890-1061725/.minikube/machines (perms=drwxr-xr-x)
	I0717 19:09:04.803850 1081367 main.go:141] libmachine: (multinode-464644) Setting executable bit set on /home/jenkins/minikube-integration/16890-1061725/.minikube (perms=drwxr-xr-x)
	I0717 19:09:04.803857 1081367 main.go:141] libmachine: (multinode-464644) Setting executable bit set on /home/jenkins/minikube-integration/16890-1061725 (perms=drwxrwxr-x)
	I0717 19:09:04.803864 1081367 main.go:141] libmachine: (multinode-464644) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 19:09:04.803871 1081367 main.go:141] libmachine: (multinode-464644) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 19:09:04.803885 1081367 main.go:141] libmachine: (multinode-464644) Creating domain...
	I0717 19:09:04.803901 1081367 main.go:141] libmachine: (multinode-464644) DBG | Skipping /home - not owner
	I0717 19:09:04.805086 1081367 main.go:141] libmachine: (multinode-464644) define libvirt domain using xml: 
	I0717 19:09:04.805113 1081367 main.go:141] libmachine: (multinode-464644) <domain type='kvm'>
	I0717 19:09:04.805125 1081367 main.go:141] libmachine: (multinode-464644)   <name>multinode-464644</name>
	I0717 19:09:04.805140 1081367 main.go:141] libmachine: (multinode-464644)   <memory unit='MiB'>2200</memory>
	I0717 19:09:04.805182 1081367 main.go:141] libmachine: (multinode-464644)   <vcpu>2</vcpu>
	I0717 19:09:04.805209 1081367 main.go:141] libmachine: (multinode-464644)   <features>
	I0717 19:09:04.805221 1081367 main.go:141] libmachine: (multinode-464644)     <acpi/>
	I0717 19:09:04.805233 1081367 main.go:141] libmachine: (multinode-464644)     <apic/>
	I0717 19:09:04.805243 1081367 main.go:141] libmachine: (multinode-464644)     <pae/>
	I0717 19:09:04.805256 1081367 main.go:141] libmachine: (multinode-464644)     
	I0717 19:09:04.805276 1081367 main.go:141] libmachine: (multinode-464644)   </features>
	I0717 19:09:04.805291 1081367 main.go:141] libmachine: (multinode-464644)   <cpu mode='host-passthrough'>
	I0717 19:09:04.805304 1081367 main.go:141] libmachine: (multinode-464644)   
	I0717 19:09:04.805316 1081367 main.go:141] libmachine: (multinode-464644)   </cpu>
	I0717 19:09:04.805329 1081367 main.go:141] libmachine: (multinode-464644)   <os>
	I0717 19:09:04.805356 1081367 main.go:141] libmachine: (multinode-464644)     <type>hvm</type>
	I0717 19:09:04.805388 1081367 main.go:141] libmachine: (multinode-464644)     <boot dev='cdrom'/>
	I0717 19:09:04.805419 1081367 main.go:141] libmachine: (multinode-464644)     <boot dev='hd'/>
	I0717 19:09:04.805436 1081367 main.go:141] libmachine: (multinode-464644)     <bootmenu enable='no'/>
	I0717 19:09:04.805450 1081367 main.go:141] libmachine: (multinode-464644)   </os>
	I0717 19:09:04.805465 1081367 main.go:141] libmachine: (multinode-464644)   <devices>
	I0717 19:09:04.805479 1081367 main.go:141] libmachine: (multinode-464644)     <disk type='file' device='cdrom'>
	I0717 19:09:04.805498 1081367 main.go:141] libmachine: (multinode-464644)       <source file='/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644/boot2docker.iso'/>
	I0717 19:09:04.805509 1081367 main.go:141] libmachine: (multinode-464644)       <target dev='hdc' bus='scsi'/>
	I0717 19:09:04.805524 1081367 main.go:141] libmachine: (multinode-464644)       <readonly/>
	I0717 19:09:04.805536 1081367 main.go:141] libmachine: (multinode-464644)     </disk>
	I0717 19:09:04.805591 1081367 main.go:141] libmachine: (multinode-464644)     <disk type='file' device='disk'>
	I0717 19:09:04.805619 1081367 main.go:141] libmachine: (multinode-464644)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 19:09:04.805639 1081367 main.go:141] libmachine: (multinode-464644)       <source file='/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644/multinode-464644.rawdisk'/>
	I0717 19:09:04.805649 1081367 main.go:141] libmachine: (multinode-464644)       <target dev='hda' bus='virtio'/>
	I0717 19:09:04.805659 1081367 main.go:141] libmachine: (multinode-464644)     </disk>
	I0717 19:09:04.805680 1081367 main.go:141] libmachine: (multinode-464644)     <interface type='network'>
	I0717 19:09:04.805705 1081367 main.go:141] libmachine: (multinode-464644)       <source network='mk-multinode-464644'/>
	I0717 19:09:04.805725 1081367 main.go:141] libmachine: (multinode-464644)       <model type='virtio'/>
	I0717 19:09:04.805737 1081367 main.go:141] libmachine: (multinode-464644)     </interface>
	I0717 19:09:04.805750 1081367 main.go:141] libmachine: (multinode-464644)     <interface type='network'>
	I0717 19:09:04.805764 1081367 main.go:141] libmachine: (multinode-464644)       <source network='default'/>
	I0717 19:09:04.805776 1081367 main.go:141] libmachine: (multinode-464644)       <model type='virtio'/>
	I0717 19:09:04.805788 1081367 main.go:141] libmachine: (multinode-464644)     </interface>
	I0717 19:09:04.805803 1081367 main.go:141] libmachine: (multinode-464644)     <serial type='pty'>
	I0717 19:09:04.805818 1081367 main.go:141] libmachine: (multinode-464644)       <target port='0'/>
	I0717 19:09:04.805830 1081367 main.go:141] libmachine: (multinode-464644)     </serial>
	I0717 19:09:04.805844 1081367 main.go:141] libmachine: (multinode-464644)     <console type='pty'>
	I0717 19:09:04.805857 1081367 main.go:141] libmachine: (multinode-464644)       <target type='serial' port='0'/>
	I0717 19:09:04.805870 1081367 main.go:141] libmachine: (multinode-464644)     </console>
	I0717 19:09:04.805882 1081367 main.go:141] libmachine: (multinode-464644)     <rng model='virtio'>
	I0717 19:09:04.805903 1081367 main.go:141] libmachine: (multinode-464644)       <backend model='random'>/dev/random</backend>
	I0717 19:09:04.805921 1081367 main.go:141] libmachine: (multinode-464644)     </rng>
	I0717 19:09:04.805936 1081367 main.go:141] libmachine: (multinode-464644)     
	I0717 19:09:04.805954 1081367 main.go:141] libmachine: (multinode-464644)     
	I0717 19:09:04.805967 1081367 main.go:141] libmachine: (multinode-464644)   </devices>
	I0717 19:09:04.805978 1081367 main.go:141] libmachine: (multinode-464644) </domain>
	I0717 19:09:04.805991 1081367 main.go:141] libmachine: (multinode-464644) 
	I0717 19:09:04.810924 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:41:8c:58 in network default
	I0717 19:09:04.811616 1081367 main.go:141] libmachine: (multinode-464644) Ensuring networks are active...
	I0717 19:09:04.811647 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:04.812479 1081367 main.go:141] libmachine: (multinode-464644) Ensuring network default is active
	I0717 19:09:04.812744 1081367 main.go:141] libmachine: (multinode-464644) Ensuring network mk-multinode-464644 is active
	I0717 19:09:04.813320 1081367 main.go:141] libmachine: (multinode-464644) Getting domain xml...
	I0717 19:09:04.814169 1081367 main.go:141] libmachine: (multinode-464644) Creating domain...
	I0717 19:09:06.095423 1081367 main.go:141] libmachine: (multinode-464644) Waiting to get IP...
	I0717 19:09:06.096357 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:06.096670 1081367 main.go:141] libmachine: (multinode-464644) DBG | unable to find current IP address of domain multinode-464644 in network mk-multinode-464644
	I0717 19:09:06.096720 1081367 main.go:141] libmachine: (multinode-464644) DBG | I0717 19:09:06.096678 1081390 retry.go:31] will retry after 302.034507ms: waiting for machine to come up
	I0717 19:09:06.400294 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:06.400912 1081367 main.go:141] libmachine: (multinode-464644) DBG | unable to find current IP address of domain multinode-464644 in network mk-multinode-464644
	I0717 19:09:06.400938 1081367 main.go:141] libmachine: (multinode-464644) DBG | I0717 19:09:06.400843 1081390 retry.go:31] will retry after 306.989862ms: waiting for machine to come up
	I0717 19:09:06.709629 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:06.710146 1081367 main.go:141] libmachine: (multinode-464644) DBG | unable to find current IP address of domain multinode-464644 in network mk-multinode-464644
	I0717 19:09:06.710196 1081367 main.go:141] libmachine: (multinode-464644) DBG | I0717 19:09:06.710118 1081390 retry.go:31] will retry after 305.215924ms: waiting for machine to come up
	I0717 19:09:07.016808 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:07.017234 1081367 main.go:141] libmachine: (multinode-464644) DBG | unable to find current IP address of domain multinode-464644 in network mk-multinode-464644
	I0717 19:09:07.017270 1081367 main.go:141] libmachine: (multinode-464644) DBG | I0717 19:09:07.017179 1081390 retry.go:31] will retry after 491.90812ms: waiting for machine to come up
	I0717 19:09:07.511110 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:07.511572 1081367 main.go:141] libmachine: (multinode-464644) DBG | unable to find current IP address of domain multinode-464644 in network mk-multinode-464644
	I0717 19:09:07.511608 1081367 main.go:141] libmachine: (multinode-464644) DBG | I0717 19:09:07.511511 1081390 retry.go:31] will retry after 728.842193ms: waiting for machine to come up
	I0717 19:09:08.241661 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:08.242152 1081367 main.go:141] libmachine: (multinode-464644) DBG | unable to find current IP address of domain multinode-464644 in network mk-multinode-464644
	I0717 19:09:08.242189 1081367 main.go:141] libmachine: (multinode-464644) DBG | I0717 19:09:08.242079 1081390 retry.go:31] will retry after 742.884273ms: waiting for machine to come up
	I0717 19:09:08.986158 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:08.986648 1081367 main.go:141] libmachine: (multinode-464644) DBG | unable to find current IP address of domain multinode-464644 in network mk-multinode-464644
	I0717 19:09:08.986681 1081367 main.go:141] libmachine: (multinode-464644) DBG | I0717 19:09:08.986588 1081390 retry.go:31] will retry after 1.152063473s: waiting for machine to come up
	I0717 19:09:10.140790 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:10.141295 1081367 main.go:141] libmachine: (multinode-464644) DBG | unable to find current IP address of domain multinode-464644 in network mk-multinode-464644
	I0717 19:09:10.141324 1081367 main.go:141] libmachine: (multinode-464644) DBG | I0717 19:09:10.141234 1081390 retry.go:31] will retry after 1.286100241s: waiting for machine to come up
	I0717 19:09:11.429895 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:11.430313 1081367 main.go:141] libmachine: (multinode-464644) DBG | unable to find current IP address of domain multinode-464644 in network mk-multinode-464644
	I0717 19:09:11.430348 1081367 main.go:141] libmachine: (multinode-464644) DBG | I0717 19:09:11.430258 1081390 retry.go:31] will retry after 1.313784651s: waiting for machine to come up
	I0717 19:09:12.745517 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:12.745928 1081367 main.go:141] libmachine: (multinode-464644) DBG | unable to find current IP address of domain multinode-464644 in network mk-multinode-464644
	I0717 19:09:12.745951 1081367 main.go:141] libmachine: (multinode-464644) DBG | I0717 19:09:12.745897 1081390 retry.go:31] will retry after 2.32132245s: waiting for machine to come up
	I0717 19:09:15.068435 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:15.069124 1081367 main.go:141] libmachine: (multinode-464644) DBG | unable to find current IP address of domain multinode-464644 in network mk-multinode-464644
	I0717 19:09:15.069152 1081367 main.go:141] libmachine: (multinode-464644) DBG | I0717 19:09:15.069058 1081390 retry.go:31] will retry after 1.852769716s: waiting for machine to come up
	I0717 19:09:16.924307 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:16.924885 1081367 main.go:141] libmachine: (multinode-464644) DBG | unable to find current IP address of domain multinode-464644 in network mk-multinode-464644
	I0717 19:09:16.924917 1081367 main.go:141] libmachine: (multinode-464644) DBG | I0717 19:09:16.924841 1081390 retry.go:31] will retry after 3.040703776s: waiting for machine to come up
	I0717 19:09:19.967772 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:19.968233 1081367 main.go:141] libmachine: (multinode-464644) DBG | unable to find current IP address of domain multinode-464644 in network mk-multinode-464644
	I0717 19:09:19.968267 1081367 main.go:141] libmachine: (multinode-464644) DBG | I0717 19:09:19.968184 1081390 retry.go:31] will retry after 3.563809407s: waiting for machine to come up
	I0717 19:09:23.536172 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:23.536707 1081367 main.go:141] libmachine: (multinode-464644) DBG | unable to find current IP address of domain multinode-464644 in network mk-multinode-464644
	I0717 19:09:23.536733 1081367 main.go:141] libmachine: (multinode-464644) DBG | I0717 19:09:23.536659 1081390 retry.go:31] will retry after 5.50805129s: waiting for machine to come up
	I0717 19:09:29.050627 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:29.051378 1081367 main.go:141] libmachine: (multinode-464644) Found IP for machine: 192.168.39.174
	I0717 19:09:29.051415 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has current primary IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:29.051490 1081367 main.go:141] libmachine: (multinode-464644) Reserving static IP address...
	I0717 19:09:29.051987 1081367 main.go:141] libmachine: (multinode-464644) DBG | unable to find host DHCP lease matching {name: "multinode-464644", mac: "52:54:00:7b:06:f6", ip: "192.168.39.174"} in network mk-multinode-464644
	I0717 19:09:29.151028 1081367 main.go:141] libmachine: (multinode-464644) DBG | Getting to WaitForSSH function...
	I0717 19:09:29.151071 1081367 main.go:141] libmachine: (multinode-464644) Reserved static IP address: 192.168.39.174
	I0717 19:09:29.151132 1081367 main.go:141] libmachine: (multinode-464644) Waiting for SSH to be available...
	I0717 19:09:29.154500 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:29.154865 1081367 main.go:141] libmachine: (multinode-464644) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644
	I0717 19:09:29.154902 1081367 main.go:141] libmachine: (multinode-464644) DBG | unable to find defined IP address of network mk-multinode-464644 interface with MAC address 52:54:00:7b:06:f6
	I0717 19:09:29.155181 1081367 main.go:141] libmachine: (multinode-464644) DBG | Using SSH client type: external
	I0717 19:09:29.155214 1081367 main.go:141] libmachine: (multinode-464644) DBG | Using SSH private key: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644/id_rsa (-rw-------)
	I0717 19:09:29.155255 1081367 main.go:141] libmachine: (multinode-464644) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:09:29.155275 1081367 main.go:141] libmachine: (multinode-464644) DBG | About to run SSH command:
	I0717 19:09:29.155300 1081367 main.go:141] libmachine: (multinode-464644) DBG | exit 0
	I0717 19:09:29.159255 1081367 main.go:141] libmachine: (multinode-464644) DBG | SSH cmd err, output: exit status 255: 
	I0717 19:09:29.159292 1081367 main.go:141] libmachine: (multinode-464644) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0717 19:09:29.159311 1081367 main.go:141] libmachine: (multinode-464644) DBG | command : exit 0
	I0717 19:09:29.159321 1081367 main.go:141] libmachine: (multinode-464644) DBG | err     : exit status 255
	I0717 19:09:29.159335 1081367 main.go:141] libmachine: (multinode-464644) DBG | output  : 
	I0717 19:09:32.161478 1081367 main.go:141] libmachine: (multinode-464644) DBG | Getting to WaitForSSH function...
	I0717 19:09:32.165004 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:32.165703 1081367 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:09:20 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:09:32.165743 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:32.165886 1081367 main.go:141] libmachine: (multinode-464644) DBG | Using SSH client type: external
	I0717 19:09:32.165919 1081367 main.go:141] libmachine: (multinode-464644) DBG | Using SSH private key: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644/id_rsa (-rw-------)
	I0717 19:09:32.165965 1081367 main.go:141] libmachine: (multinode-464644) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:09:32.165990 1081367 main.go:141] libmachine: (multinode-464644) DBG | About to run SSH command:
	I0717 19:09:32.166013 1081367 main.go:141] libmachine: (multinode-464644) DBG | exit 0
	I0717 19:09:32.258053 1081367 main.go:141] libmachine: (multinode-464644) DBG | SSH cmd err, output: <nil>: 
	I0717 19:09:32.258475 1081367 main.go:141] libmachine: (multinode-464644) KVM machine creation complete!
	I0717 19:09:32.258850 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetConfigRaw
	I0717 19:09:32.259459 1081367 main.go:141] libmachine: (multinode-464644) Calling .DriverName
	I0717 19:09:32.259742 1081367 main.go:141] libmachine: (multinode-464644) Calling .DriverName
	I0717 19:09:32.260001 1081367 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 19:09:32.260019 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetState
	I0717 19:09:32.261641 1081367 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 19:09:32.261659 1081367 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 19:09:32.261666 1081367 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 19:09:32.261673 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHHostname
	I0717 19:09:32.264712 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:32.265186 1081367 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:09:20 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:09:32.265231 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:32.265577 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHPort
	I0717 19:09:32.265818 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHKeyPath
	I0717 19:09:32.265982 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHKeyPath
	I0717 19:09:32.266126 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHUsername
	I0717 19:09:32.266374 1081367 main.go:141] libmachine: Using SSH client type: native
	I0717 19:09:32.266892 1081367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0717 19:09:32.266909 1081367 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 19:09:32.389411 1081367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:09:32.389434 1081367 main.go:141] libmachine: Detecting the provisioner...
	I0717 19:09:32.389446 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHHostname
	I0717 19:09:32.392572 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:32.393067 1081367 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:09:20 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:09:32.393110 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:32.393353 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHPort
	I0717 19:09:32.393637 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHKeyPath
	I0717 19:09:32.393871 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHKeyPath
	I0717 19:09:32.394064 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHUsername
	I0717 19:09:32.394284 1081367 main.go:141] libmachine: Using SSH client type: native
	I0717 19:09:32.394747 1081367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0717 19:09:32.394765 1081367 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 19:09:32.518937 1081367 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gf5d52c7-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0717 19:09:32.519044 1081367 main.go:141] libmachine: found compatible host: buildroot
	I0717 19:09:32.519056 1081367 main.go:141] libmachine: Provisioning with buildroot...
	I0717 19:09:32.519068 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetMachineName
	I0717 19:09:32.519441 1081367 buildroot.go:166] provisioning hostname "multinode-464644"
	I0717 19:09:32.519476 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetMachineName
	I0717 19:09:32.519816 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHHostname
	I0717 19:09:32.522713 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:32.523246 1081367 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:09:20 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:09:32.523289 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:32.523410 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHPort
	I0717 19:09:32.523659 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHKeyPath
	I0717 19:09:32.523856 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHKeyPath
	I0717 19:09:32.524079 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHUsername
	I0717 19:09:32.524291 1081367 main.go:141] libmachine: Using SSH client type: native
	I0717 19:09:32.524706 1081367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0717 19:09:32.524725 1081367 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-464644 && echo "multinode-464644" | sudo tee /etc/hostname
	I0717 19:09:32.661210 1081367 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-464644
	
	I0717 19:09:32.661250 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHHostname
	I0717 19:09:32.664745 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:32.665143 1081367 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:09:20 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:09:32.665177 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:32.665373 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHPort
	I0717 19:09:32.665639 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHKeyPath
	I0717 19:09:32.665842 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHKeyPath
	I0717 19:09:32.666044 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHUsername
	I0717 19:09:32.666247 1081367 main.go:141] libmachine: Using SSH client type: native
	I0717 19:09:32.666660 1081367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0717 19:09:32.666677 1081367 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-464644' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-464644/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-464644' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:09:32.799956 1081367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:09:32.800000 1081367 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16890-1061725/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-1061725/.minikube}
	I0717 19:09:32.800022 1081367 buildroot.go:174] setting up certificates
	I0717 19:09:32.800032 1081367 provision.go:83] configureAuth start
	I0717 19:09:32.800042 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetMachineName
	I0717 19:09:32.800416 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetIP
	I0717 19:09:32.803847 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:32.804492 1081367 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:09:20 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:09:32.804542 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:32.804899 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHHostname
	I0717 19:09:32.807766 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:32.808189 1081367 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:09:20 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:09:32.808242 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:32.808372 1081367 provision.go:138] copyHostCerts
	I0717 19:09:32.808404 1081367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem
	I0717 19:09:32.808437 1081367 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem, removing ...
	I0717 19:09:32.808453 1081367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem
	I0717 19:09:32.808552 1081367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem (1675 bytes)
	I0717 19:09:32.808690 1081367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem
	I0717 19:09:32.808720 1081367 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem, removing ...
	I0717 19:09:32.808727 1081367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem
	I0717 19:09:32.808758 1081367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem (1082 bytes)
	I0717 19:09:32.808820 1081367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem
	I0717 19:09:32.808838 1081367 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem, removing ...
	I0717 19:09:32.808841 1081367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem
	I0717 19:09:32.808863 1081367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem (1123 bytes)
	I0717 19:09:32.808930 1081367 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem org=jenkins.multinode-464644 san=[192.168.39.174 192.168.39.174 localhost 127.0.0.1 minikube multinode-464644]
	I0717 19:09:32.871141 1081367 provision.go:172] copyRemoteCerts
	I0717 19:09:32.871221 1081367 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:09:32.871251 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHHostname
	I0717 19:09:32.874824 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:32.875303 1081367 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:09:20 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:09:32.875342 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:32.875587 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHPort
	I0717 19:09:32.875929 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHKeyPath
	I0717 19:09:32.876197 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHUsername
	I0717 19:09:32.876417 1081367 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644/id_rsa Username:docker}
	I0717 19:09:32.967552 1081367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 19:09:32.967662 1081367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 19:09:32.994945 1081367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 19:09:32.995047 1081367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0717 19:09:33.020896 1081367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 19:09:33.020991 1081367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:09:33.045589 1081367 provision.go:86] duration metric: configureAuth took 245.540241ms
	I0717 19:09:33.045625 1081367 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:09:33.045851 1081367 config.go:182] Loaded profile config "multinode-464644": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:09:33.045960 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHHostname
	I0717 19:09:33.049116 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:33.049683 1081367 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:09:20 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:09:33.049712 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:33.049911 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHPort
	I0717 19:09:33.050147 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHKeyPath
	I0717 19:09:33.050319 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHKeyPath
	I0717 19:09:33.050518 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHUsername
	I0717 19:09:33.050749 1081367 main.go:141] libmachine: Using SSH client type: native
	I0717 19:09:33.051164 1081367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0717 19:09:33.051183 1081367 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:09:33.387434 1081367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:09:33.387473 1081367 main.go:141] libmachine: Checking connection to Docker...
	I0717 19:09:33.387482 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetURL
	I0717 19:09:33.388930 1081367 main.go:141] libmachine: (multinode-464644) DBG | Using libvirt version 6000000
	I0717 19:09:33.391744 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:33.392300 1081367 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:09:20 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:09:33.392337 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:33.392574 1081367 main.go:141] libmachine: Docker is up and running!
	I0717 19:09:33.392596 1081367 main.go:141] libmachine: Reticulating splines...
	I0717 19:09:33.392605 1081367 client.go:171] LocalClient.Create took 29.075255315s
	I0717 19:09:33.392635 1081367 start.go:167] duration metric: libmachine.API.Create for "multinode-464644" took 29.075332422s
	I0717 19:09:33.392672 1081367 start.go:300] post-start starting for "multinode-464644" (driver="kvm2")
	I0717 19:09:33.392689 1081367 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:09:33.392716 1081367 main.go:141] libmachine: (multinode-464644) Calling .DriverName
	I0717 19:09:33.393029 1081367 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:09:33.393068 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHHostname
	I0717 19:09:33.395963 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:33.396332 1081367 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:09:20 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:09:33.396360 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:33.396570 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHPort
	I0717 19:09:33.396812 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHKeyPath
	I0717 19:09:33.396997 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHUsername
	I0717 19:09:33.397154 1081367 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644/id_rsa Username:docker}
	I0717 19:09:33.488584 1081367 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:09:33.493089 1081367 command_runner.go:130] > NAME=Buildroot
	I0717 19:09:33.493124 1081367 command_runner.go:130] > VERSION=2021.02.12-1-gf5d52c7-dirty
	I0717 19:09:33.493130 1081367 command_runner.go:130] > ID=buildroot
	I0717 19:09:33.493136 1081367 command_runner.go:130] > VERSION_ID=2021.02.12
	I0717 19:09:33.493162 1081367 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0717 19:09:33.493233 1081367 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 19:09:33.493259 1081367 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/addons for local assets ...
	I0717 19:09:33.493334 1081367 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/files for local assets ...
	I0717 19:09:33.493425 1081367 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> 10689542.pem in /etc/ssl/certs
	I0717 19:09:33.493437 1081367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> /etc/ssl/certs/10689542.pem
	I0717 19:09:33.493528 1081367 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:09:33.503178 1081367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:09:33.529139 1081367 start.go:303] post-start completed in 136.435827ms
	I0717 19:09:33.529198 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetConfigRaw
	I0717 19:09:33.529913 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetIP
	I0717 19:09:33.533480 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:33.533951 1081367 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:09:20 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:09:33.533997 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:33.534293 1081367 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/config.json ...
	I0717 19:09:33.534505 1081367 start.go:128] duration metric: createHost completed in 29.237358177s
	I0717 19:09:33.534559 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHHostname
	I0717 19:09:33.537139 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:33.537498 1081367 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:09:20 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:09:33.537541 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:33.537716 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHPort
	I0717 19:09:33.537956 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHKeyPath
	I0717 19:09:33.538115 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHKeyPath
	I0717 19:09:33.538297 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHUsername
	I0717 19:09:33.538449 1081367 main.go:141] libmachine: Using SSH client type: native
	I0717 19:09:33.538860 1081367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0717 19:09:33.538873 1081367 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:09:33.658868 1081367 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689620973.646430859
	
	I0717 19:09:33.658900 1081367 fix.go:206] guest clock: 1689620973.646430859
	I0717 19:09:33.658908 1081367 fix.go:219] Guest: 2023-07-17 19:09:33.646430859 +0000 UTC Remote: 2023-07-17 19:09:33.534542942 +0000 UTC m=+29.357110706 (delta=111.887917ms)
	I0717 19:09:33.658947 1081367 fix.go:190] guest clock delta is within tolerance: 111.887917ms
	I0717 19:09:33.658954 1081367 start.go:83] releasing machines lock for "multinode-464644", held for 29.36195255s
	I0717 19:09:33.658980 1081367 main.go:141] libmachine: (multinode-464644) Calling .DriverName
	I0717 19:09:33.659333 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetIP
	I0717 19:09:33.662289 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:33.662662 1081367 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:09:20 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:09:33.662708 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:33.662956 1081367 main.go:141] libmachine: (multinode-464644) Calling .DriverName
	I0717 19:09:33.663699 1081367 main.go:141] libmachine: (multinode-464644) Calling .DriverName
	I0717 19:09:33.663929 1081367 main.go:141] libmachine: (multinode-464644) Calling .DriverName
	I0717 19:09:33.664015 1081367 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:09:33.664082 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHHostname
	I0717 19:09:33.664209 1081367 ssh_runner.go:195] Run: cat /version.json
	I0717 19:09:33.664228 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHHostname
	I0717 19:09:33.667088 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:33.667417 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:33.667513 1081367 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:09:20 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:09:33.667562 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:33.667709 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHPort
	I0717 19:09:33.667894 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHKeyPath
	I0717 19:09:33.667922 1081367 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:09:20 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:09:33.667963 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:33.668105 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHPort
	I0717 19:09:33.668106 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHUsername
	I0717 19:09:33.668331 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHKeyPath
	I0717 19:09:33.668356 1081367 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644/id_rsa Username:docker}
	I0717 19:09:33.668506 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHUsername
	I0717 19:09:33.668669 1081367 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644/id_rsa Username:docker}
	I0717 19:09:33.780909 1081367 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0717 19:09:33.781981 1081367 command_runner.go:130] > {"iso_version": "v1.31.0", "kicbase_version": "v0.0.40", "minikube_version": "v1.31.0", "commit": "be0194f682c2c37366eacb8c13503cb6c7a41cf8"}
	W0717 19:09:33.782154 1081367 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 19:09:33.782232 1081367 ssh_runner.go:195] Run: systemctl --version
	I0717 19:09:33.788651 1081367 command_runner.go:130] > systemd 247 (247)
	I0717 19:09:33.788696 1081367 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0717 19:09:33.788774 1081367 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:09:33.961581 1081367 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 19:09:33.967845 1081367 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0717 19:09:33.967913 1081367 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:09:33.967986 1081367 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:09:33.985101 1081367 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0717 19:09:33.985185 1081367 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:09:33.985195 1081367 start.go:469] detecting cgroup driver to use...
	I0717 19:09:33.985277 1081367 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:09:34.001635 1081367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:09:34.016497 1081367 docker.go:196] disabling cri-docker service (if available) ...
	I0717 19:09:34.016582 1081367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:09:34.032373 1081367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:09:34.048065 1081367 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:09:34.063936 1081367 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0717 19:09:34.163128 1081367 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:09:34.178292 1081367 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0717 19:09:34.289726 1081367 docker.go:212] disabling docker service ...
	I0717 19:09:34.289805 1081367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:09:34.305080 1081367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:09:34.318323 1081367 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0717 19:09:34.318640 1081367 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:09:34.437915 1081367 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0717 19:09:34.438029 1081367 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:09:34.549830 1081367 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0717 19:09:34.549865 1081367 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0717 19:09:34.549937 1081367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:09:34.564397 1081367 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:09:34.584163 1081367 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0717 19:09:34.584269 1081367 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:09:34.584344 1081367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:09:34.595381 1081367 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:09:34.595483 1081367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:09:34.606958 1081367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:09:34.618088 1081367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:09:34.629257 1081367 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:09:34.640737 1081367 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:09:34.650124 1081367 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:09:34.650485 1081367 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:09:34.650574 1081367 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:09:34.665041 1081367 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:09:34.675521 1081367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:09:34.793044 1081367 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:09:34.967466 1081367 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:09:34.967569 1081367 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:09:34.973194 1081367 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0717 19:09:34.973219 1081367 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0717 19:09:34.973226 1081367 command_runner.go:130] > Device: 16h/22d	Inode: 759         Links: 1
	I0717 19:09:34.973237 1081367 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 19:09:34.973245 1081367 command_runner.go:130] > Access: 2023-07-17 19:09:34.941596603 +0000
	I0717 19:09:34.973254 1081367 command_runner.go:130] > Modify: 2023-07-17 19:09:34.941596603 +0000
	I0717 19:09:34.973262 1081367 command_runner.go:130] > Change: 2023-07-17 19:09:34.941596603 +0000
	I0717 19:09:34.973269 1081367 command_runner.go:130] >  Birth: -
	I0717 19:09:34.973294 1081367 start.go:537] Will wait 60s for crictl version
	I0717 19:09:34.973362 1081367 ssh_runner.go:195] Run: which crictl
	I0717 19:09:34.977732 1081367 command_runner.go:130] > /usr/bin/crictl
	I0717 19:09:34.977833 1081367 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:09:35.009249 1081367 command_runner.go:130] > Version:  0.1.0
	I0717 19:09:35.009304 1081367 command_runner.go:130] > RuntimeName:  cri-o
	I0717 19:09:35.009311 1081367 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0717 19:09:35.009321 1081367 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0717 19:09:35.011367 1081367 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
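For reference: because /etc/crictl.yaml was written earlier with the CRI-O socket as the runtime endpoint, the same handshake can be reproduced by hand; the commented output below is what this run reported:

    # Reproduce the runtime handshake (endpoint comes from /etc/crictl.yaml)
    sudo crictl version
    # Version:            0.1.0
    # RuntimeName:        cri-o
    # RuntimeVersion:     1.24.1
    # RuntimeApiVersion:  v1alpha2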
	I0717 19:09:35.011461 1081367 ssh_runner.go:195] Run: crio --version
	I0717 19:09:35.062939 1081367 command_runner.go:130] > crio version 1.24.1
	I0717 19:09:35.062968 1081367 command_runner.go:130] > Version:          1.24.1
	I0717 19:09:35.062980 1081367 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0717 19:09:35.062985 1081367 command_runner.go:130] > GitTreeState:     dirty
	I0717 19:09:35.062990 1081367 command_runner.go:130] > BuildDate:        2023-07-15T02:24:22Z
	I0717 19:09:35.062997 1081367 command_runner.go:130] > GoVersion:        go1.19.9
	I0717 19:09:35.063004 1081367 command_runner.go:130] > Compiler:         gc
	I0717 19:09:35.063012 1081367 command_runner.go:130] > Platform:         linux/amd64
	I0717 19:09:35.063020 1081367 command_runner.go:130] > Linkmode:         dynamic
	I0717 19:09:35.063033 1081367 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0717 19:09:35.063040 1081367 command_runner.go:130] > SeccompEnabled:   true
	I0717 19:09:35.063047 1081367 command_runner.go:130] > AppArmorEnabled:  false
	I0717 19:09:35.064414 1081367 ssh_runner.go:195] Run: crio --version
	I0717 19:09:35.112814 1081367 command_runner.go:130] > crio version 1.24.1
	I0717 19:09:35.112847 1081367 command_runner.go:130] > Version:          1.24.1
	I0717 19:09:35.112860 1081367 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0717 19:09:35.112869 1081367 command_runner.go:130] > GitTreeState:     dirty
	I0717 19:09:35.112880 1081367 command_runner.go:130] > BuildDate:        2023-07-15T02:24:22Z
	I0717 19:09:35.112890 1081367 command_runner.go:130] > GoVersion:        go1.19.9
	I0717 19:09:35.112903 1081367 command_runner.go:130] > Compiler:         gc
	I0717 19:09:35.112909 1081367 command_runner.go:130] > Platform:         linux/amd64
	I0717 19:09:35.112916 1081367 command_runner.go:130] > Linkmode:         dynamic
	I0717 19:09:35.112924 1081367 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0717 19:09:35.112928 1081367 command_runner.go:130] > SeccompEnabled:   true
	I0717 19:09:35.112932 1081367 command_runner.go:130] > AppArmorEnabled:  false
	I0717 19:09:35.118276 1081367 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0717 19:09:35.120212 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetIP
	I0717 19:09:35.123397 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:35.123903 1081367 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:09:20 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:09:35.123930 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:09:35.124247 1081367 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 19:09:35.129217 1081367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
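For reference: the bash one-liner above strips any stale host.minikube.internal line from /etc/hosts and re-appends it with this run's gateway IP, so repeated starts stay idempotent. A quick check (illustrative only):

    # Confirm the entry the one-liner appends
    grep 'host.minikube.internal' /etc/hosts
    # 192.168.39.1	host.minikube.internal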
	I0717 19:09:35.142723 1081367 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 19:09:35.142816 1081367 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:09:35.171714 1081367 command_runner.go:130] > {
	I0717 19:09:35.171743 1081367 command_runner.go:130] >   "images": [
	I0717 19:09:35.171749 1081367 command_runner.go:130] >   ]
	I0717 19:09:35.171753 1081367 command_runner.go:130] > }
	I0717 19:09:35.173137 1081367 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0717 19:09:35.173205 1081367 ssh_runner.go:195] Run: which lz4
	I0717 19:09:35.177317 1081367 command_runner.go:130] > /usr/bin/lz4
	I0717 19:09:35.177360 1081367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0717 19:09:35.177442 1081367 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 19:09:35.182198 1081367 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:09:35.182281 1081367 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:09:35.182306 1081367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0717 19:09:37.010135 1081367 crio.go:444] Took 1.832714 seconds to copy over tarball
	I0717 19:09:37.010230 1081367 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 19:09:40.124821 1081367 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.114551749s)
	I0717 19:09:40.124866 1081367 crio.go:451] Took 3.114701 seconds to extract the tarball
	I0717 19:09:40.124880 1081367 ssh_runner.go:146] rm: /preloaded.tar.lz4
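For reference: the preload path above amounts to four steps: confirm lz4 exists on the guest, copy the cached tarball to /preloaded.tar.lz4, unpack it into /var (the CRI-O image store lives under /var/lib/containers/storage, per the config dump below), then remove the tarball. A manual sketch of the same sequence, with <node> as a placeholder for the guest address and the tarball name taken from this run:

    # Manual equivalent of minikube's preload handling (sketch only)
    which lz4
    scp preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 <node>:/preloaded.tar.lz4
    ssh <node> sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
    ssh <node> sudo rm -f /preloaded.tar.lz4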
	I0717 19:09:40.166128 1081367 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:09:40.233136 1081367 command_runner.go:130] > {
	I0717 19:09:40.233161 1081367 command_runner.go:130] >   "images": [
	I0717 19:09:40.233165 1081367 command_runner.go:130] >     {
	I0717 19:09:40.233177 1081367 command_runner.go:130] >       "id": "b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da",
	I0717 19:09:40.233182 1081367 command_runner.go:130] >       "repoTags": [
	I0717 19:09:40.233195 1081367 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0717 19:09:40.233199 1081367 command_runner.go:130] >       ],
	I0717 19:09:40.233203 1081367 command_runner.go:130] >       "repoDigests": [
	I0717 19:09:40.233213 1081367 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974",
	I0717 19:09:40.233220 1081367 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"
	I0717 19:09:40.233223 1081367 command_runner.go:130] >       ],
	I0717 19:09:40.233228 1081367 command_runner.go:130] >       "size": "65249302",
	I0717 19:09:40.233232 1081367 command_runner.go:130] >       "uid": null,
	I0717 19:09:40.233236 1081367 command_runner.go:130] >       "username": "",
	I0717 19:09:40.233244 1081367 command_runner.go:130] >       "spec": null
	I0717 19:09:40.233247 1081367 command_runner.go:130] >     },
	I0717 19:09:40.233251 1081367 command_runner.go:130] >     {
	I0717 19:09:40.233257 1081367 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0717 19:09:40.233262 1081367 command_runner.go:130] >       "repoTags": [
	I0717 19:09:40.233269 1081367 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0717 19:09:40.233273 1081367 command_runner.go:130] >       ],
	I0717 19:09:40.233281 1081367 command_runner.go:130] >       "repoDigests": [
	I0717 19:09:40.233290 1081367 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0717 19:09:40.233298 1081367 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0717 19:09:40.233304 1081367 command_runner.go:130] >       ],
	I0717 19:09:40.233308 1081367 command_runner.go:130] >       "size": "31470524",
	I0717 19:09:40.233312 1081367 command_runner.go:130] >       "uid": null,
	I0717 19:09:40.233321 1081367 command_runner.go:130] >       "username": "",
	I0717 19:09:40.233325 1081367 command_runner.go:130] >       "spec": null
	I0717 19:09:40.233330 1081367 command_runner.go:130] >     },
	I0717 19:09:40.233333 1081367 command_runner.go:130] >     {
	I0717 19:09:40.233339 1081367 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0717 19:09:40.233346 1081367 command_runner.go:130] >       "repoTags": [
	I0717 19:09:40.233350 1081367 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0717 19:09:40.233355 1081367 command_runner.go:130] >       ],
	I0717 19:09:40.233359 1081367 command_runner.go:130] >       "repoDigests": [
	I0717 19:09:40.233367 1081367 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0717 19:09:40.233376 1081367 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0717 19:09:40.233385 1081367 command_runner.go:130] >       ],
	I0717 19:09:40.233393 1081367 command_runner.go:130] >       "size": "53621675",
	I0717 19:09:40.233399 1081367 command_runner.go:130] >       "uid": null,
	I0717 19:09:40.233405 1081367 command_runner.go:130] >       "username": "",
	I0717 19:09:40.233412 1081367 command_runner.go:130] >       "spec": null
	I0717 19:09:40.233415 1081367 command_runner.go:130] >     },
	I0717 19:09:40.233419 1081367 command_runner.go:130] >     {
	I0717 19:09:40.233426 1081367 command_runner.go:130] >       "id": "86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681",
	I0717 19:09:40.233430 1081367 command_runner.go:130] >       "repoTags": [
	I0717 19:09:40.233436 1081367 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.7-0"
	I0717 19:09:40.233440 1081367 command_runner.go:130] >       ],
	I0717 19:09:40.233444 1081367 command_runner.go:130] >       "repoDigests": [
	I0717 19:09:40.233453 1081367 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83",
	I0717 19:09:40.233460 1081367 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9"
	I0717 19:09:40.233464 1081367 command_runner.go:130] >       ],
	I0717 19:09:40.233468 1081367 command_runner.go:130] >       "size": "297083935",
	I0717 19:09:40.233473 1081367 command_runner.go:130] >       "uid": {
	I0717 19:09:40.233477 1081367 command_runner.go:130] >         "value": "0"
	I0717 19:09:40.233490 1081367 command_runner.go:130] >       },
	I0717 19:09:40.233497 1081367 command_runner.go:130] >       "username": "",
	I0717 19:09:40.233504 1081367 command_runner.go:130] >       "spec": null
	I0717 19:09:40.233507 1081367 command_runner.go:130] >     },
	I0717 19:09:40.233513 1081367 command_runner.go:130] >     {
	I0717 19:09:40.233520 1081367 command_runner.go:130] >       "id": "08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a",
	I0717 19:09:40.233526 1081367 command_runner.go:130] >       "repoTags": [
	I0717 19:09:40.233531 1081367 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.27.3"
	I0717 19:09:40.233537 1081367 command_runner.go:130] >       ],
	I0717 19:09:40.233541 1081367 command_runner.go:130] >       "repoDigests": [
	I0717 19:09:40.233551 1081367 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb",
	I0717 19:09:40.233573 1081367 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0"
	I0717 19:09:40.233579 1081367 command_runner.go:130] >       ],
	I0717 19:09:40.233583 1081367 command_runner.go:130] >       "size": "122065872",
	I0717 19:09:40.233587 1081367 command_runner.go:130] >       "uid": {
	I0717 19:09:40.233595 1081367 command_runner.go:130] >         "value": "0"
	I0717 19:09:40.233599 1081367 command_runner.go:130] >       },
	I0717 19:09:40.233605 1081367 command_runner.go:130] >       "username": "",
	I0717 19:09:40.233609 1081367 command_runner.go:130] >       "spec": null
	I0717 19:09:40.233615 1081367 command_runner.go:130] >     },
	I0717 19:09:40.233620 1081367 command_runner.go:130] >     {
	I0717 19:09:40.233627 1081367 command_runner.go:130] >       "id": "7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f",
	I0717 19:09:40.233636 1081367 command_runner.go:130] >       "repoTags": [
	I0717 19:09:40.233647 1081367 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.27.3"
	I0717 19:09:40.233655 1081367 command_runner.go:130] >       ],
	I0717 19:09:40.233664 1081367 command_runner.go:130] >       "repoDigests": [
	I0717 19:09:40.233679 1081367 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e",
	I0717 19:09:40.233690 1081367 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:d3bdc20876edfaa4894cf8464dc98592385a43cbc033b37846dccc2460c7bc06"
	I0717 19:09:40.233697 1081367 command_runner.go:130] >       ],
	I0717 19:09:40.233701 1081367 command_runner.go:130] >       "size": "113919286",
	I0717 19:09:40.233707 1081367 command_runner.go:130] >       "uid": {
	I0717 19:09:40.233712 1081367 command_runner.go:130] >         "value": "0"
	I0717 19:09:40.233718 1081367 command_runner.go:130] >       },
	I0717 19:09:40.233722 1081367 command_runner.go:130] >       "username": "",
	I0717 19:09:40.233729 1081367 command_runner.go:130] >       "spec": null
	I0717 19:09:40.233733 1081367 command_runner.go:130] >     },
	I0717 19:09:40.233739 1081367 command_runner.go:130] >     {
	I0717 19:09:40.233749 1081367 command_runner.go:130] >       "id": "5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c",
	I0717 19:09:40.233755 1081367 command_runner.go:130] >       "repoTags": [
	I0717 19:09:40.233761 1081367 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.27.3"
	I0717 19:09:40.233767 1081367 command_runner.go:130] >       ],
	I0717 19:09:40.233771 1081367 command_runner.go:130] >       "repoDigests": [
	I0717 19:09:40.233780 1081367 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f",
	I0717 19:09:40.233787 1081367 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699"
	I0717 19:09:40.233793 1081367 command_runner.go:130] >       ],
	I0717 19:09:40.233798 1081367 command_runner.go:130] >       "size": "72713623",
	I0717 19:09:40.233804 1081367 command_runner.go:130] >       "uid": null,
	I0717 19:09:40.233808 1081367 command_runner.go:130] >       "username": "",
	I0717 19:09:40.233815 1081367 command_runner.go:130] >       "spec": null
	I0717 19:09:40.233819 1081367 command_runner.go:130] >     },
	I0717 19:09:40.233824 1081367 command_runner.go:130] >     {
	I0717 19:09:40.233831 1081367 command_runner.go:130] >       "id": "41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a",
	I0717 19:09:40.233837 1081367 command_runner.go:130] >       "repoTags": [
	I0717 19:09:40.233842 1081367 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.27.3"
	I0717 19:09:40.233848 1081367 command_runner.go:130] >       ],
	I0717 19:09:40.233852 1081367 command_runner.go:130] >       "repoDigests": [
	I0717 19:09:40.233862 1081367 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082",
	I0717 19:09:40.233949 1081367 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8"
	I0717 19:09:40.233965 1081367 command_runner.go:130] >       ],
	I0717 19:09:40.233969 1081367 command_runner.go:130] >       "size": "59811126",
	I0717 19:09:40.233973 1081367 command_runner.go:130] >       "uid": {
	I0717 19:09:40.233977 1081367 command_runner.go:130] >         "value": "0"
	I0717 19:09:40.233980 1081367 command_runner.go:130] >       },
	I0717 19:09:40.233984 1081367 command_runner.go:130] >       "username": "",
	I0717 19:09:40.233988 1081367 command_runner.go:130] >       "spec": null
	I0717 19:09:40.233991 1081367 command_runner.go:130] >     },
	I0717 19:09:40.233995 1081367 command_runner.go:130] >     {
	I0717 19:09:40.234006 1081367 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0717 19:09:40.234010 1081367 command_runner.go:130] >       "repoTags": [
	I0717 19:09:40.234015 1081367 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0717 19:09:40.234018 1081367 command_runner.go:130] >       ],
	I0717 19:09:40.234022 1081367 command_runner.go:130] >       "repoDigests": [
	I0717 19:09:40.234029 1081367 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0717 19:09:40.234039 1081367 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0717 19:09:40.234046 1081367 command_runner.go:130] >       ],
	I0717 19:09:40.234050 1081367 command_runner.go:130] >       "size": "750414",
	I0717 19:09:40.234054 1081367 command_runner.go:130] >       "uid": {
	I0717 19:09:40.234061 1081367 command_runner.go:130] >         "value": "65535"
	I0717 19:09:40.234065 1081367 command_runner.go:130] >       },
	I0717 19:09:40.234072 1081367 command_runner.go:130] >       "username": "",
	I0717 19:09:40.234076 1081367 command_runner.go:130] >       "spec": null
	I0717 19:09:40.234081 1081367 command_runner.go:130] >     }
	I0717 19:09:40.234085 1081367 command_runner.go:130] >   ]
	I0717 19:09:40.234088 1081367 command_runner.go:130] > }
	I0717 19:09:40.234681 1081367 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 19:09:40.234705 1081367 cache_images.go:84] Images are preloaded, skipping loading
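For reference: once the tarball is extracted, the second crictl query above returns the full Kubernetes v1.27.3 image set and minikube skips image loading. To read the same JSON without the log framing, a jq one-liner works (illustrative only; assumes jq is installed on the node):

    # List only the repo tags from the crictl JSON above
    sudo crictl images --output json | jq -r '.images[].repoTags[]'
    # docker.io/kindest/kindnetd:v20230511-dc714da8
    # gcr.io/k8s-minikube/storage-provisioner:v5
    # registry.k8s.io/kube-apiserver:v1.27.3
    # ...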
	I0717 19:09:40.234769 1081367 ssh_runner.go:195] Run: crio config
	I0717 19:09:40.295544 1081367 command_runner.go:130] ! time="2023-07-17 19:09:40.288466119Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0717 19:09:40.295582 1081367 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0717 19:09:40.300886 1081367 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0717 19:09:40.300911 1081367 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0717 19:09:40.300917 1081367 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0717 19:09:40.300921 1081367 command_runner.go:130] > #
	I0717 19:09:40.300927 1081367 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0717 19:09:40.300933 1081367 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0717 19:09:40.300939 1081367 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0717 19:09:40.300954 1081367 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0717 19:09:40.300960 1081367 command_runner.go:130] > # reload'.
	I0717 19:09:40.300966 1081367 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0717 19:09:40.300972 1081367 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0717 19:09:40.300978 1081367 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0717 19:09:40.300984 1081367 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0717 19:09:40.300988 1081367 command_runner.go:130] > [crio]
	I0717 19:09:40.300995 1081367 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0717 19:09:40.301000 1081367 command_runner.go:130] > # containers images, in this directory.
	I0717 19:09:40.301007 1081367 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0717 19:09:40.301019 1081367 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0717 19:09:40.301026 1081367 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0717 19:09:40.301032 1081367 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0717 19:09:40.301039 1081367 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0717 19:09:40.301043 1081367 command_runner.go:130] > storage_driver = "overlay"
	I0717 19:09:40.301049 1081367 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0717 19:09:40.301062 1081367 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0717 19:09:40.301069 1081367 command_runner.go:130] > storage_option = [
	I0717 19:09:40.301077 1081367 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0717 19:09:40.301087 1081367 command_runner.go:130] > ]
	I0717 19:09:40.301099 1081367 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0717 19:09:40.301111 1081367 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0717 19:09:40.301122 1081367 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0717 19:09:40.301135 1081367 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0717 19:09:40.301147 1081367 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0717 19:09:40.301158 1081367 command_runner.go:130] > # always happen on a node reboot
	I0717 19:09:40.301171 1081367 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0717 19:09:40.301183 1081367 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0717 19:09:40.301196 1081367 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0717 19:09:40.301214 1081367 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0717 19:09:40.301225 1081367 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0717 19:09:40.301240 1081367 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0717 19:09:40.301254 1081367 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0717 19:09:40.301263 1081367 command_runner.go:130] > # internal_wipe = true
	I0717 19:09:40.301275 1081367 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0717 19:09:40.301288 1081367 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0717 19:09:40.301299 1081367 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0717 19:09:40.301310 1081367 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0717 19:09:40.301323 1081367 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0717 19:09:40.301332 1081367 command_runner.go:130] > [crio.api]
	I0717 19:09:40.301341 1081367 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0717 19:09:40.301352 1081367 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0717 19:09:40.301363 1081367 command_runner.go:130] > # IP address on which the stream server will listen.
	I0717 19:09:40.301373 1081367 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0717 19:09:40.301388 1081367 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0717 19:09:40.301399 1081367 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0717 19:09:40.301409 1081367 command_runner.go:130] > # stream_port = "0"
	I0717 19:09:40.301420 1081367 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0717 19:09:40.301430 1081367 command_runner.go:130] > # stream_enable_tls = false
	I0717 19:09:40.301449 1081367 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0717 19:09:40.301465 1081367 command_runner.go:130] > # stream_idle_timeout = ""
	I0717 19:09:40.301492 1081367 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0717 19:09:40.301502 1081367 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0717 19:09:40.301506 1081367 command_runner.go:130] > # minutes.
	I0717 19:09:40.301510 1081367 command_runner.go:130] > # stream_tls_cert = ""
	I0717 19:09:40.301518 1081367 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0717 19:09:40.301524 1081367 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0717 19:09:40.301531 1081367 command_runner.go:130] > # stream_tls_key = ""
	I0717 19:09:40.301537 1081367 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0717 19:09:40.301545 1081367 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0717 19:09:40.301569 1081367 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0717 19:09:40.301579 1081367 command_runner.go:130] > # stream_tls_ca = ""
	I0717 19:09:40.301592 1081367 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0717 19:09:40.301602 1081367 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0717 19:09:40.301612 1081367 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0717 19:09:40.301619 1081367 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0717 19:09:40.301679 1081367 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0717 19:09:40.301694 1081367 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0717 19:09:40.301698 1081367 command_runner.go:130] > [crio.runtime]
	I0717 19:09:40.301704 1081367 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0717 19:09:40.301709 1081367 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0717 19:09:40.301715 1081367 command_runner.go:130] > # "nofile=1024:2048"
	I0717 19:09:40.301721 1081367 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0717 19:09:40.301728 1081367 command_runner.go:130] > # default_ulimits = [
	I0717 19:09:40.301732 1081367 command_runner.go:130] > # ]
	I0717 19:09:40.301740 1081367 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0717 19:09:40.301746 1081367 command_runner.go:130] > # no_pivot = false
	I0717 19:09:40.301752 1081367 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0717 19:09:40.301761 1081367 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0717 19:09:40.301768 1081367 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0717 19:09:40.301774 1081367 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0717 19:09:40.301781 1081367 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0717 19:09:40.301788 1081367 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 19:09:40.301796 1081367 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0717 19:09:40.301800 1081367 command_runner.go:130] > # Cgroup setting for conmon
	I0717 19:09:40.301813 1081367 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0717 19:09:40.301821 1081367 command_runner.go:130] > conmon_cgroup = "pod"
	I0717 19:09:40.301831 1081367 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0717 19:09:40.301838 1081367 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0717 19:09:40.301844 1081367 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 19:09:40.301850 1081367 command_runner.go:130] > conmon_env = [
	I0717 19:09:40.301858 1081367 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0717 19:09:40.301864 1081367 command_runner.go:130] > ]
	I0717 19:09:40.301870 1081367 command_runner.go:130] > # Additional environment variables to set for all the
	I0717 19:09:40.301877 1081367 command_runner.go:130] > # containers. These are overridden if set in the
	I0717 19:09:40.301886 1081367 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0717 19:09:40.301895 1081367 command_runner.go:130] > # default_env = [
	I0717 19:09:40.301904 1081367 command_runner.go:130] > # ]
	I0717 19:09:40.301916 1081367 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0717 19:09:40.301925 1081367 command_runner.go:130] > # selinux = false
	I0717 19:09:40.301937 1081367 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0717 19:09:40.301950 1081367 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0717 19:09:40.301962 1081367 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0717 19:09:40.301973 1081367 command_runner.go:130] > # seccomp_profile = ""
	I0717 19:09:40.301986 1081367 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0717 19:09:40.301998 1081367 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0717 19:09:40.302010 1081367 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0717 19:09:40.302020 1081367 command_runner.go:130] > # which might increase security.
	I0717 19:09:40.302027 1081367 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0717 19:09:40.302034 1081367 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0717 19:09:40.302042 1081367 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0717 19:09:40.302051 1081367 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0717 19:09:40.302059 1081367 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0717 19:09:40.302067 1081367 command_runner.go:130] > # This option supports live configuration reload.
	I0717 19:09:40.302071 1081367 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0717 19:09:40.302084 1081367 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0717 19:09:40.302091 1081367 command_runner.go:130] > # the cgroup blockio controller.
	I0717 19:09:40.302095 1081367 command_runner.go:130] > # blockio_config_file = ""
	I0717 19:09:40.302107 1081367 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0717 19:09:40.302113 1081367 command_runner.go:130] > # irqbalance daemon.
	I0717 19:09:40.302119 1081367 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0717 19:09:40.302128 1081367 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0717 19:09:40.302135 1081367 command_runner.go:130] > # This option supports live configuration reload.
	I0717 19:09:40.302140 1081367 command_runner.go:130] > # rdt_config_file = ""
	I0717 19:09:40.302152 1081367 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0717 19:09:40.302162 1081367 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0717 19:09:40.302172 1081367 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0717 19:09:40.302184 1081367 command_runner.go:130] > # separate_pull_cgroup = ""
	I0717 19:09:40.302193 1081367 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0717 19:09:40.302202 1081367 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0717 19:09:40.302208 1081367 command_runner.go:130] > # will be added.
	I0717 19:09:40.302212 1081367 command_runner.go:130] > # default_capabilities = [
	I0717 19:09:40.302219 1081367 command_runner.go:130] > # 	"CHOWN",
	I0717 19:09:40.302223 1081367 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0717 19:09:40.302229 1081367 command_runner.go:130] > # 	"FSETID",
	I0717 19:09:40.302234 1081367 command_runner.go:130] > # 	"FOWNER",
	I0717 19:09:40.302240 1081367 command_runner.go:130] > # 	"SETGID",
	I0717 19:09:40.302244 1081367 command_runner.go:130] > # 	"SETUID",
	I0717 19:09:40.302251 1081367 command_runner.go:130] > # 	"SETPCAP",
	I0717 19:09:40.302255 1081367 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0717 19:09:40.302261 1081367 command_runner.go:130] > # 	"KILL",
	I0717 19:09:40.302265 1081367 command_runner.go:130] > # ]
	I0717 19:09:40.302273 1081367 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0717 19:09:40.302279 1081367 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 19:09:40.302285 1081367 command_runner.go:130] > # default_sysctls = [
	I0717 19:09:40.302289 1081367 command_runner.go:130] > # ]
	I0717 19:09:40.302296 1081367 command_runner.go:130] > # List of devices on the host that a
	I0717 19:09:40.302302 1081367 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0717 19:09:40.302308 1081367 command_runner.go:130] > # allowed_devices = [
	I0717 19:09:40.302312 1081367 command_runner.go:130] > # 	"/dev/fuse",
	I0717 19:09:40.302318 1081367 command_runner.go:130] > # ]
	I0717 19:09:40.302323 1081367 command_runner.go:130] > # List of additional devices. specified as
	I0717 19:09:40.302333 1081367 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0717 19:09:40.302340 1081367 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0717 19:09:40.302369 1081367 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 19:09:40.302376 1081367 command_runner.go:130] > # additional_devices = [
	I0717 19:09:40.302379 1081367 command_runner.go:130] > # ]
	I0717 19:09:40.302384 1081367 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0717 19:09:40.302391 1081367 command_runner.go:130] > # cdi_spec_dirs = [
	I0717 19:09:40.302395 1081367 command_runner.go:130] > # 	"/etc/cdi",
	I0717 19:09:40.302401 1081367 command_runner.go:130] > # 	"/var/run/cdi",
	I0717 19:09:40.302405 1081367 command_runner.go:130] > # ]
	I0717 19:09:40.302413 1081367 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0717 19:09:40.302422 1081367 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0717 19:09:40.302429 1081367 command_runner.go:130] > # Defaults to false.
	I0717 19:09:40.302434 1081367 command_runner.go:130] > # device_ownership_from_security_context = false
	I0717 19:09:40.302442 1081367 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0717 19:09:40.302450 1081367 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0717 19:09:40.302454 1081367 command_runner.go:130] > # hooks_dir = [
	I0717 19:09:40.302459 1081367 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0717 19:09:40.302464 1081367 command_runner.go:130] > # ]
	I0717 19:09:40.302471 1081367 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0717 19:09:40.302484 1081367 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0717 19:09:40.302491 1081367 command_runner.go:130] > # its default mounts from the following two files:
	I0717 19:09:40.302494 1081367 command_runner.go:130] > #
	I0717 19:09:40.302502 1081367 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0717 19:09:40.302508 1081367 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0717 19:09:40.302516 1081367 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0717 19:09:40.302522 1081367 command_runner.go:130] > #
	I0717 19:09:40.302528 1081367 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0717 19:09:40.302537 1081367 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0717 19:09:40.302546 1081367 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0717 19:09:40.302553 1081367 command_runner.go:130] > #      only add mounts it finds in this file.
	I0717 19:09:40.302556 1081367 command_runner.go:130] > #
	I0717 19:09:40.302560 1081367 command_runner.go:130] > # default_mounts_file = ""
	I0717 19:09:40.302568 1081367 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0717 19:09:40.302574 1081367 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0717 19:09:40.302580 1081367 command_runner.go:130] > pids_limit = 1024
	I0717 19:09:40.302586 1081367 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0717 19:09:40.302595 1081367 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0717 19:09:40.302603 1081367 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0717 19:09:40.302613 1081367 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0717 19:09:40.302619 1081367 command_runner.go:130] > # log_size_max = -1
	I0717 19:09:40.302630 1081367 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I0717 19:09:40.302639 1081367 command_runner.go:130] > # log_to_journald = false
	I0717 19:09:40.302659 1081367 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0717 19:09:40.302667 1081367 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0717 19:09:40.302672 1081367 command_runner.go:130] > # Path to directory for container attach sockets.
	I0717 19:09:40.302679 1081367 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0717 19:09:40.302686 1081367 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0717 19:09:40.302692 1081367 command_runner.go:130] > # bind_mount_prefix = ""
	I0717 19:09:40.302698 1081367 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0717 19:09:40.302705 1081367 command_runner.go:130] > # read_only = false
	I0717 19:09:40.302711 1081367 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0717 19:09:40.302720 1081367 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0717 19:09:40.302727 1081367 command_runner.go:130] > # live configuration reload.
	I0717 19:09:40.302731 1081367 command_runner.go:130] > # log_level = "info"
	I0717 19:09:40.302739 1081367 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0717 19:09:40.302746 1081367 command_runner.go:130] > # This option supports live configuration reload.
	I0717 19:09:40.302750 1081367 command_runner.go:130] > # log_filter = ""
	I0717 19:09:40.302758 1081367 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0717 19:09:40.302764 1081367 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0717 19:09:40.302770 1081367 command_runner.go:130] > # separated by comma.
	I0717 19:09:40.302774 1081367 command_runner.go:130] > # uid_mappings = ""
	I0717 19:09:40.302782 1081367 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0717 19:09:40.302793 1081367 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0717 19:09:40.302805 1081367 command_runner.go:130] > # separated by comma.
	I0717 19:09:40.302811 1081367 command_runner.go:130] > # gid_mappings = ""
	I0717 19:09:40.302817 1081367 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0717 19:09:40.302825 1081367 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 19:09:40.302834 1081367 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 19:09:40.302840 1081367 command_runner.go:130] > # minimum_mappable_uid = -1
	I0717 19:09:40.302846 1081367 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0717 19:09:40.302854 1081367 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 19:09:40.302862 1081367 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 19:09:40.302870 1081367 command_runner.go:130] > # minimum_mappable_gid = -1
	I0717 19:09:40.302875 1081367 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0717 19:09:40.302884 1081367 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0717 19:09:40.302896 1081367 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0717 19:09:40.302906 1081367 command_runner.go:130] > # ctr_stop_timeout = 30
	I0717 19:09:40.302918 1081367 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0717 19:09:40.302930 1081367 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0717 19:09:40.302940 1081367 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0717 19:09:40.302951 1081367 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0717 19:09:40.302966 1081367 command_runner.go:130] > drop_infra_ctr = false
	I0717 19:09:40.302981 1081367 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0717 19:09:40.302993 1081367 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0717 19:09:40.303007 1081367 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0717 19:09:40.303014 1081367 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0717 19:09:40.303020 1081367 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0717 19:09:40.303027 1081367 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0717 19:09:40.303031 1081367 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0717 19:09:40.303042 1081367 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0717 19:09:40.303048 1081367 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0717 19:09:40.303055 1081367 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0717 19:09:40.303064 1081367 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0717 19:09:40.303073 1081367 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0717 19:09:40.303079 1081367 command_runner.go:130] > # default_runtime = "runc"
	I0717 19:09:40.303085 1081367 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0717 19:09:40.303094 1081367 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0717 19:09:40.303109 1081367 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I0717 19:09:40.303125 1081367 command_runner.go:130] > # creation as a file is not desired either.
	I0717 19:09:40.303136 1081367 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0717 19:09:40.303143 1081367 command_runner.go:130] > # the hostname is being managed dynamically.
	I0717 19:09:40.303151 1081367 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0717 19:09:40.303154 1081367 command_runner.go:130] > # ]
	I0717 19:09:40.303163 1081367 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0717 19:09:40.303171 1081367 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0717 19:09:40.303179 1081367 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0717 19:09:40.303187 1081367 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0717 19:09:40.303191 1081367 command_runner.go:130] > #
	I0717 19:09:40.303196 1081367 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0717 19:09:40.303203 1081367 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0717 19:09:40.303207 1081367 command_runner.go:130] > #  runtime_type = "oci"
	I0717 19:09:40.303214 1081367 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0717 19:09:40.303219 1081367 command_runner.go:130] > #  privileged_without_host_devices = false
	I0717 19:09:40.303226 1081367 command_runner.go:130] > #  allowed_annotations = []
	I0717 19:09:40.303230 1081367 command_runner.go:130] > # Where:
	I0717 19:09:40.303238 1081367 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0717 19:09:40.303246 1081367 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0717 19:09:40.303254 1081367 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0717 19:09:40.303264 1081367 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0717 19:09:40.303271 1081367 command_runner.go:130] > #   in $PATH.
	I0717 19:09:40.303277 1081367 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0717 19:09:40.303284 1081367 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0717 19:09:40.303290 1081367 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0717 19:09:40.303295 1081367 command_runner.go:130] > #   state.
	I0717 19:09:40.303302 1081367 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0717 19:09:40.303310 1081367 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0717 19:09:40.303318 1081367 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0717 19:09:40.303326 1081367 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0717 19:09:40.303332 1081367 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0717 19:09:40.303342 1081367 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0717 19:09:40.303349 1081367 command_runner.go:130] > #   The currently recognized values are:
	I0717 19:09:40.303355 1081367 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0717 19:09:40.303364 1081367 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0717 19:09:40.303370 1081367 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0717 19:09:40.303380 1081367 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0717 19:09:40.303387 1081367 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0717 19:09:40.303396 1081367 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0717 19:09:40.303402 1081367 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0717 19:09:40.303410 1081367 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0717 19:09:40.303415 1081367 command_runner.go:130] > #   should be moved to the container's cgroup
	I0717 19:09:40.303422 1081367 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0717 19:09:40.303429 1081367 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0717 19:09:40.303435 1081367 command_runner.go:130] > runtime_type = "oci"
	I0717 19:09:40.303441 1081367 command_runner.go:130] > runtime_root = "/run/runc"
	I0717 19:09:40.303447 1081367 command_runner.go:130] > runtime_config_path = ""
	I0717 19:09:40.303452 1081367 command_runner.go:130] > monitor_path = ""
	I0717 19:09:40.303458 1081367 command_runner.go:130] > monitor_cgroup = ""
	I0717 19:09:40.303462 1081367 command_runner.go:130] > monitor_exec_cgroup = ""
	I0717 19:09:40.303471 1081367 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0717 19:09:40.303475 1081367 command_runner.go:130] > # running containers
	I0717 19:09:40.303486 1081367 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0717 19:09:40.303492 1081367 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0717 19:09:40.303553 1081367 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0717 19:09:40.303569 1081367 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0717 19:09:40.303576 1081367 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0717 19:09:40.303581 1081367 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0717 19:09:40.303585 1081367 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0717 19:09:40.303592 1081367 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0717 19:09:40.303597 1081367 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0717 19:09:40.303604 1081367 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0717 19:09:40.303610 1081367 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0717 19:09:40.303617 1081367 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0717 19:09:40.303623 1081367 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0717 19:09:40.303634 1081367 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0717 19:09:40.303643 1081367 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0717 19:09:40.303651 1081367 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0717 19:09:40.303660 1081367 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0717 19:09:40.303670 1081367 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0717 19:09:40.303678 1081367 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0717 19:09:40.303685 1081367 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0717 19:09:40.303697 1081367 command_runner.go:130] > # Example:
	I0717 19:09:40.303705 1081367 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0717 19:09:40.303710 1081367 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0717 19:09:40.303717 1081367 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0717 19:09:40.303724 1081367 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0717 19:09:40.303728 1081367 command_runner.go:130] > # cpuset = 0
	I0717 19:09:40.303734 1081367 command_runner.go:130] > # cpushares = "0-1"
	I0717 19:09:40.303738 1081367 command_runner.go:130] > # Where:
	I0717 19:09:40.303745 1081367 command_runner.go:130] > # The workload name is workload-type.
	I0717 19:09:40.303752 1081367 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0717 19:09:40.303760 1081367 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0717 19:09:40.303767 1081367 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0717 19:09:40.303777 1081367 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0717 19:09:40.303783 1081367 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0717 19:09:40.303786 1081367 command_runner.go:130] > # 
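As a concrete illustration of the workloads mechanism described in the comments above, a drop-in such as the following could define one workload (the drop-in path, workload name and cpushares value here are illustrative assumptions, not taken from this run):

    sudo tee /etc/crio/crio.conf.d/10-workload.conf <<'EOF'
    # hypothetical workload; a pod opts in via the "io.crio/workload" annotation
    [crio.runtime.workloads.workload-type]
    activation_annotation = "io.crio/workload"
    annotation_prefix = "io.crio.workload-type"
    [crio.runtime.workloads.workload-type.resources]
    cpushares = 512
    EOF
    sudo systemctl restart crio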
	I0717 19:09:40.303792 1081367 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0717 19:09:40.303795 1081367 command_runner.go:130] > #
	I0717 19:09:40.303800 1081367 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0717 19:09:40.303806 1081367 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0717 19:09:40.303812 1081367 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0717 19:09:40.303818 1081367 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0717 19:09:40.303824 1081367 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0717 19:09:40.303827 1081367 command_runner.go:130] > [crio.image]
	I0717 19:09:40.303832 1081367 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0717 19:09:40.303837 1081367 command_runner.go:130] > # default_transport = "docker://"
	I0717 19:09:40.303842 1081367 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0717 19:09:40.303848 1081367 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0717 19:09:40.303852 1081367 command_runner.go:130] > # global_auth_file = ""
	I0717 19:09:40.303856 1081367 command_runner.go:130] > # The image used to instantiate infra containers.
	I0717 19:09:40.303861 1081367 command_runner.go:130] > # This option supports live configuration reload.
	I0717 19:09:40.303865 1081367 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0717 19:09:40.303871 1081367 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0717 19:09:40.303877 1081367 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0717 19:09:40.303884 1081367 command_runner.go:130] > # This option supports live configuration reload.
	I0717 19:09:40.303890 1081367 command_runner.go:130] > # pause_image_auth_file = ""
	I0717 19:09:40.303902 1081367 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0717 19:09:40.303915 1081367 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0717 19:09:40.303931 1081367 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0717 19:09:40.303944 1081367 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0717 19:09:40.303954 1081367 command_runner.go:130] > # pause_command = "/pause"
	I0717 19:09:40.303967 1081367 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0717 19:09:40.303979 1081367 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0717 19:09:40.303991 1081367 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0717 19:09:40.304003 1081367 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0717 19:09:40.304016 1081367 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0717 19:09:40.304024 1081367 command_runner.go:130] > # signature_policy = ""
	I0717 19:09:40.304030 1081367 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0717 19:09:40.304039 1081367 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0717 19:09:40.304045 1081367 command_runner.go:130] > # changing them here.
	I0717 19:09:40.304050 1081367 command_runner.go:130] > # insecure_registries = [
	I0717 19:09:40.304055 1081367 command_runner.go:130] > # ]
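Per the comments above, registry settings normally belong in /etc/containers/registries.conf rather than in this file; a minimal sketch of marking a private registry as insecure there (the registry address is a placeholder, not from this run) would be:

    sudo tee -a /etc/containers/registries.conf <<'EOF'
    [[registry]]
    location = "192.168.39.1:5000"   # placeholder address
    insecure = true
    EOF
    sudo systemctl restart crio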
	I0717 19:09:40.304070 1081367 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0717 19:09:40.304078 1081367 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0717 19:09:40.304083 1081367 command_runner.go:130] > # image_volumes = "mkdir"
	I0717 19:09:40.304091 1081367 command_runner.go:130] > # Temporary directory to use for storing big files
	I0717 19:09:40.304096 1081367 command_runner.go:130] > # big_files_temporary_dir = ""
	I0717 19:09:40.304105 1081367 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0717 19:09:40.304111 1081367 command_runner.go:130] > # CNI plugins.
	I0717 19:09:40.304115 1081367 command_runner.go:130] > [crio.network]
	I0717 19:09:40.304123 1081367 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0717 19:09:40.304131 1081367 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0717 19:09:40.304137 1081367 command_runner.go:130] > # cni_default_network = ""
	I0717 19:09:40.304143 1081367 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0717 19:09:40.304150 1081367 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0717 19:09:40.304155 1081367 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0717 19:09:40.304162 1081367 command_runner.go:130] > # plugin_dirs = [
	I0717 19:09:40.304166 1081367 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0717 19:09:40.304172 1081367 command_runner.go:130] > # ]
	I0717 19:09:40.304177 1081367 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0717 19:09:40.304183 1081367 command_runner.go:130] > [crio.metrics]
	I0717 19:09:40.304188 1081367 command_runner.go:130] > # Globally enable or disable metrics support.
	I0717 19:09:40.304195 1081367 command_runner.go:130] > enable_metrics = true
	I0717 19:09:40.304199 1081367 command_runner.go:130] > # Specify enabled metrics collectors.
	I0717 19:09:40.304206 1081367 command_runner.go:130] > # Per default all metrics are enabled.
	I0717 19:09:40.304212 1081367 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0717 19:09:40.304220 1081367 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0717 19:09:40.304228 1081367 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0717 19:09:40.304239 1081367 command_runner.go:130] > # metrics_collectors = [
	I0717 19:09:40.304246 1081367 command_runner.go:130] > # 	"operations",
	I0717 19:09:40.304251 1081367 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0717 19:09:40.304258 1081367 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0717 19:09:40.304262 1081367 command_runner.go:130] > # 	"operations_errors",
	I0717 19:09:40.304273 1081367 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0717 19:09:40.304282 1081367 command_runner.go:130] > # 	"image_pulls_by_name",
	I0717 19:09:40.304292 1081367 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0717 19:09:40.304301 1081367 command_runner.go:130] > # 	"image_pulls_failures",
	I0717 19:09:40.304308 1081367 command_runner.go:130] > # 	"image_pulls_successes",
	I0717 19:09:40.304312 1081367 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0717 19:09:40.304319 1081367 command_runner.go:130] > # 	"image_layer_reuse",
	I0717 19:09:40.304323 1081367 command_runner.go:130] > # 	"containers_oom_total",
	I0717 19:09:40.304329 1081367 command_runner.go:130] > # 	"containers_oom",
	I0717 19:09:40.304334 1081367 command_runner.go:130] > # 	"processes_defunct",
	I0717 19:09:40.304341 1081367 command_runner.go:130] > # 	"operations_total",
	I0717 19:09:40.304346 1081367 command_runner.go:130] > # 	"operations_latency_seconds",
	I0717 19:09:40.304353 1081367 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0717 19:09:40.304358 1081367 command_runner.go:130] > # 	"operations_errors_total",
	I0717 19:09:40.304363 1081367 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0717 19:09:40.304367 1081367 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0717 19:09:40.304374 1081367 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0717 19:09:40.304378 1081367 command_runner.go:130] > # 	"image_pulls_success_total",
	I0717 19:09:40.304385 1081367 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0717 19:09:40.304389 1081367 command_runner.go:130] > # 	"containers_oom_count_total",
	I0717 19:09:40.304395 1081367 command_runner.go:130] > # ]
	I0717 19:09:40.304400 1081367 command_runner.go:130] > # The port on which the metrics server will listen.
	I0717 19:09:40.304406 1081367 command_runner.go:130] > # metrics_port = 9090
	I0717 19:09:40.304411 1081367 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0717 19:09:40.304418 1081367 command_runner.go:130] > # metrics_socket = ""
	I0717 19:09:40.304423 1081367 command_runner.go:130] > # The certificate for the secure metrics server.
	I0717 19:09:40.304431 1081367 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0717 19:09:40.304440 1081367 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0717 19:09:40.304447 1081367 command_runner.go:130] > # certificate on any modification event.
	I0717 19:09:40.304451 1081367 command_runner.go:130] > # metrics_cert = ""
	I0717 19:09:40.304458 1081367 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0717 19:09:40.304463 1081367 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0717 19:09:40.304469 1081367 command_runner.go:130] > # metrics_key = ""
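Since this profile sets enable_metrics = true and leaves metrics_port at its commented default of 9090, the exporter can be spot-checked from inside the VM; a sketch, assuming the port was not overridden elsewhere:

    curl -s http://127.0.0.1:9090/metrics | head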
	I0717 19:09:40.304474 1081367 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0717 19:09:40.304485 1081367 command_runner.go:130] > [crio.tracing]
	I0717 19:09:40.304490 1081367 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0717 19:09:40.304495 1081367 command_runner.go:130] > # enable_tracing = false
	I0717 19:09:40.304501 1081367 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0717 19:09:40.304505 1081367 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0717 19:09:40.304510 1081367 command_runner.go:130] > # Number of samples to collect per million spans.
	I0717 19:09:40.304522 1081367 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0717 19:09:40.304531 1081367 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0717 19:09:40.304535 1081367 command_runner.go:130] > [crio.stats]
	I0717 19:09:40.304543 1081367 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0717 19:09:40.304548 1081367 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0717 19:09:40.304555 1081367 command_runner.go:130] > # stats_collection_period = 0
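The block above is CRI-O echoing its (mostly commented-out) configuration file; to see the runtime's effective view once it is running, crictl can dump the runtime status and config as JSON, for example:

    sudo crictl info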
	I0717 19:09:40.304647 1081367 cni.go:84] Creating CNI manager for ""
	I0717 19:09:40.304662 1081367 cni.go:137] 1 nodes found, recommending kindnet
	I0717 19:09:40.304683 1081367 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 19:09:40.304701 1081367 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.174 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-464644 NodeName:multinode-464644 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:09:40.304893 1081367 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-464644"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.174
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
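	This generated config is written to /var/tmp/minikube/kubeadm.yaml.new a moment later and promoted to /var/tmp/minikube/kubeadm.yaml before init; to exercise the same file without mutating the node, kubeadm supports a dry run (a sketch, run on the node with the file in place):

	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run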
	
	I0717 19:09:40.305009 1081367 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-464644 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:multinode-464644 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
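The ExecStart drop-in shown above is what gets copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below; after such a change, the usual check on the node is (sketch):

    sudo systemctl daemon-reload
    systemctl cat kubelet | grep -- --node-ip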
	I0717 19:09:40.305087 1081367 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 19:09:40.315446 1081367 command_runner.go:130] > kubeadm
	I0717 19:09:40.315471 1081367 command_runner.go:130] > kubectl
	I0717 19:09:40.315476 1081367 command_runner.go:130] > kubelet
	I0717 19:09:40.315552 1081367 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:09:40.315636 1081367 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:09:40.325195 1081367 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0717 19:09:40.343904 1081367 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:09:40.362427 1081367 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0717 19:09:40.380090 1081367 ssh_runner.go:195] Run: grep 192.168.39.174	control-plane.minikube.internal$ /etc/hosts
	I0717 19:09:40.384249 1081367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
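The one-liner above strips any stale control-plane.minikube.internal entry from /etc/hosts and appends the current control-plane IP; it can be verified with, for example:

    grep control-plane.minikube.internal /etc/hosts
    getent hosts control-plane.minikube.internal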
	I0717 19:09:40.398368 1081367 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644 for IP: 192.168.39.174
	I0717 19:09:40.398408 1081367 certs.go:190] acquiring lock for shared ca certs: {Name:mkdfbc203d7bd874aff2f1c4e5c3352251804e32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:09:40.398631 1081367 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key
	I0717 19:09:40.398679 1081367 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key
	I0717 19:09:40.398728 1081367 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/client.key
	I0717 19:09:40.398741 1081367 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/client.crt with IP's: []
	I0717 19:09:40.530377 1081367 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/client.crt ...
	I0717 19:09:40.530419 1081367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/client.crt: {Name:mk73030bbef1520292c52edf06fe426003aa9a4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:09:40.530621 1081367 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/client.key ...
	I0717 19:09:40.530635 1081367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/client.key: {Name:mk45721f5374e64814d47b7d6c2d3113cb64cb21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:09:40.530717 1081367 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/apiserver.key.4baccf75
	I0717 19:09:40.530731 1081367 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/apiserver.crt.4baccf75 with IP's: [192.168.39.174 10.96.0.1 127.0.0.1 10.0.0.1]
	I0717 19:09:40.731531 1081367 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/apiserver.crt.4baccf75 ...
	I0717 19:09:40.731572 1081367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/apiserver.crt.4baccf75: {Name:mk2a150ba5a07582a0686e05ffd3672e0116709f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:09:40.731748 1081367 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/apiserver.key.4baccf75 ...
	I0717 19:09:40.731761 1081367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/apiserver.key.4baccf75: {Name:mk661568e0757e24c373bd56c5bbe31f9a57d35d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:09:40.731851 1081367 certs.go:337] copying /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/apiserver.crt.4baccf75 -> /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/apiserver.crt
	I0717 19:09:40.731916 1081367 certs.go:341] copying /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/apiserver.key.4baccf75 -> /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/apiserver.key
	I0717 19:09:40.731964 1081367 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/proxy-client.key
	I0717 19:09:40.731985 1081367 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/proxy-client.crt with IP's: []
	I0717 19:09:41.133657 1081367 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/proxy-client.crt ...
	I0717 19:09:41.133696 1081367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/proxy-client.crt: {Name:mk432b2d80b9495a7e1ff9ec7f9ebd891506ddec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:09:41.133927 1081367 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/proxy-client.key ...
	I0717 19:09:41.133945 1081367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/proxy-client.key: {Name:mk9816f326d7cf3f6e8c9a626d936e905a19ccd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:09:41.134042 1081367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 19:09:41.134072 1081367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 19:09:41.134090 1081367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 19:09:41.134105 1081367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 19:09:41.134120 1081367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 19:09:41.134140 1081367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 19:09:41.134171 1081367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 19:09:41.134189 1081367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 19:09:41.134258 1081367 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem (1338 bytes)
	W0717 19:09:41.134312 1081367 certs.go:433] ignoring /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954_empty.pem, impossibly tiny 0 bytes
	I0717 19:09:41.134327 1081367 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:09:41.134361 1081367 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem (1082 bytes)
	I0717 19:09:41.134397 1081367 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:09:41.134430 1081367 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem (1675 bytes)
	I0717 19:09:41.134483 1081367 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:09:41.134536 1081367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> /usr/share/ca-certificates/10689542.pem
	I0717 19:09:41.134558 1081367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:09:41.134575 1081367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem -> /usr/share/ca-certificates/1068954.pem
	I0717 19:09:41.135120 1081367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 19:09:41.163002 1081367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 19:09:41.189687 1081367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:09:41.219039 1081367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 19:09:41.246263 1081367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:09:41.273679 1081367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 19:09:41.299985 1081367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:09:41.325692 1081367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 19:09:41.351679 1081367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /usr/share/ca-certificates/10689542.pem (1708 bytes)
	I0717 19:09:41.377098 1081367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:09:41.403005 1081367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem --> /usr/share/ca-certificates/1068954.pem (1338 bytes)
	I0717 19:09:41.427691 1081367 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:09:41.445916 1081367 ssh_runner.go:195] Run: openssl version
	I0717 19:09:41.452551 1081367 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0717 19:09:41.452686 1081367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1068954.pem && ln -fs /usr/share/ca-certificates/1068954.pem /etc/ssl/certs/1068954.pem"
	I0717 19:09:41.463918 1081367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1068954.pem
	I0717 19:09:41.469327 1081367 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 17 18:52 /usr/share/ca-certificates/1068954.pem
	I0717 19:09:41.469377 1081367 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:52 /usr/share/ca-certificates/1068954.pem
	I0717 19:09:41.469430 1081367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1068954.pem
	I0717 19:09:41.475568 1081367 command_runner.go:130] > 51391683
	I0717 19:09:41.475712 1081367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1068954.pem /etc/ssl/certs/51391683.0"
	I0717 19:09:41.486725 1081367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10689542.pem && ln -fs /usr/share/ca-certificates/10689542.pem /etc/ssl/certs/10689542.pem"
	I0717 19:09:41.499012 1081367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10689542.pem
	I0717 19:09:41.504565 1081367 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 17 18:52 /usr/share/ca-certificates/10689542.pem
	I0717 19:09:41.504618 1081367 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:52 /usr/share/ca-certificates/10689542.pem
	I0717 19:09:41.504673 1081367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10689542.pem
	I0717 19:09:41.510603 1081367 command_runner.go:130] > 3ec20f2e
	I0717 19:09:41.510952 1081367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10689542.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:09:41.522568 1081367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:09:41.533547 1081367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:09:41.538754 1081367 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 17 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:09:41.538913 1081367 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:09:41.538972 1081367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:09:41.545117 1081367 command_runner.go:130] > b5213941
	I0717 19:09:41.545233 1081367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
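Each CA is linked under /etc/ssl/certs by its subject hash (the value printed by openssl x509 -hash above), which is how OpenSSL-based clients locate it; a manual spot check for the minikube CA would be:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # expect b5213941, per the log above
    ls -l /etc/ssl/certs/b5213941.0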
	I0717 19:09:41.556180 1081367 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 19:09:41.561157 1081367 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 19:09:41.561227 1081367 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 19:09:41.561272 1081367 kubeadm.go:404] StartCluster: {Name:multinode-464644 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-464644 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binary
Mirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:09:41.561393 1081367 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:09:41.561445 1081367 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:09:41.595972 1081367 cri.go:89] found id: ""
	I0717 19:09:41.596058 1081367 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:09:41.605897 1081367 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0717 19:09:41.605928 1081367 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0717 19:09:41.605935 1081367 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0717 19:09:41.606016 1081367 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:09:41.615560 1081367 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:09:41.624982 1081367 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0717 19:09:41.625021 1081367 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0717 19:09:41.625030 1081367 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0717 19:09:41.625040 1081367 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:09:41.625079 1081367 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:09:41.625146 1081367 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 19:09:41.988460 1081367 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 19:09:41.988485 1081367 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 19:09:54.307793 1081367 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0717 19:09:54.307842 1081367 command_runner.go:130] > [init] Using Kubernetes version: v1.27.3
	I0717 19:09:54.307903 1081367 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 19:09:54.307911 1081367 command_runner.go:130] > [preflight] Running pre-flight checks
	I0717 19:09:54.308010 1081367 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 19:09:54.308046 1081367 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 19:09:54.308222 1081367 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 19:09:54.308239 1081367 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 19:09:54.308372 1081367 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 19:09:54.308385 1081367 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 19:09:54.308463 1081367 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 19:09:54.311229 1081367 out.go:204]   - Generating certificates and keys ...
	I0717 19:09:54.308505 1081367 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 19:09:54.311370 1081367 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0717 19:09:54.311383 1081367 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 19:09:54.311471 1081367 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0717 19:09:54.311479 1081367 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 19:09:54.311593 1081367 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 19:09:54.311618 1081367 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 19:09:54.311735 1081367 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0717 19:09:54.311755 1081367 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0717 19:09:54.311816 1081367 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0717 19:09:54.311822 1081367 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0717 19:09:54.311885 1081367 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0717 19:09:54.311897 1081367 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0717 19:09:54.311968 1081367 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0717 19:09:54.311978 1081367 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0717 19:09:54.312074 1081367 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-464644] and IPs [192.168.39.174 127.0.0.1 ::1]
	I0717 19:09:54.312081 1081367 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-464644] and IPs [192.168.39.174 127.0.0.1 ::1]
	I0717 19:09:54.312123 1081367 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0717 19:09:54.312129 1081367 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0717 19:09:54.312235 1081367 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-464644] and IPs [192.168.39.174 127.0.0.1 ::1]
	I0717 19:09:54.312244 1081367 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-464644] and IPs [192.168.39.174 127.0.0.1 ::1]
	I0717 19:09:54.312308 1081367 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 19:09:54.312315 1081367 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 19:09:54.312372 1081367 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 19:09:54.312378 1081367 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 19:09:54.312415 1081367 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0717 19:09:54.312421 1081367 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0717 19:09:54.312488 1081367 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 19:09:54.312497 1081367 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 19:09:54.312554 1081367 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 19:09:54.312563 1081367 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 19:09:54.312613 1081367 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 19:09:54.312628 1081367 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 19:09:54.312697 1081367 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 19:09:54.312707 1081367 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 19:09:54.312752 1081367 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 19:09:54.312759 1081367 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 19:09:54.312838 1081367 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:09:54.312846 1081367 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:09:54.312915 1081367 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:09:54.312923 1081367 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:09:54.312966 1081367 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0717 19:09:54.312973 1081367 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 19:09:54.313040 1081367 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 19:09:54.313048 1081367 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 19:09:54.316549 1081367 out.go:204]   - Booting up control plane ...
	I0717 19:09:54.316676 1081367 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 19:09:54.316691 1081367 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 19:09:54.316771 1081367 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 19:09:54.316782 1081367 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 19:09:54.316854 1081367 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 19:09:54.316870 1081367 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 19:09:54.316978 1081367 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 19:09:54.316988 1081367 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 19:09:54.317191 1081367 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 19:09:54.317217 1081367 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 19:09:54.317288 1081367 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.506399 seconds
	I0717 19:09:54.317295 1081367 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.506399 seconds
	I0717 19:09:54.317403 1081367 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 19:09:54.317419 1081367 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 19:09:54.317577 1081367 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 19:09:54.317587 1081367 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 19:09:54.317650 1081367 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0717 19:09:54.317670 1081367 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 19:09:54.317933 1081367 command_runner.go:130] > [mark-control-plane] Marking the node multinode-464644 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 19:09:54.317944 1081367 kubeadm.go:322] [mark-control-plane] Marking the node multinode-464644 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 19:09:54.318029 1081367 command_runner.go:130] > [bootstrap-token] Using token: uqi5bv.w9s0o8txtdaswx46
	I0717 19:09:54.318044 1081367 kubeadm.go:322] [bootstrap-token] Using token: uqi5bv.w9s0o8txtdaswx46
	I0717 19:09:54.320141 1081367 out.go:204]   - Configuring RBAC rules ...
	I0717 19:09:54.320254 1081367 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 19:09:54.320265 1081367 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 19:09:54.320348 1081367 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 19:09:54.320356 1081367 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 19:09:54.320509 1081367 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 19:09:54.320530 1081367 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 19:09:54.320697 1081367 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 19:09:54.320709 1081367 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 19:09:54.320804 1081367 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 19:09:54.320811 1081367 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 19:09:54.320879 1081367 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 19:09:54.320885 1081367 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 19:09:54.320980 1081367 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 19:09:54.320986 1081367 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 19:09:54.321022 1081367 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0717 19:09:54.321036 1081367 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 19:09:54.321079 1081367 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0717 19:09:54.321084 1081367 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 19:09:54.321088 1081367 kubeadm.go:322] 
	I0717 19:09:54.321144 1081367 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0717 19:09:54.321160 1081367 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 19:09:54.321166 1081367 kubeadm.go:322] 
	I0717 19:09:54.321232 1081367 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0717 19:09:54.321239 1081367 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 19:09:54.321246 1081367 kubeadm.go:322] 
	I0717 19:09:54.321273 1081367 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0717 19:09:54.321279 1081367 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 19:09:54.321328 1081367 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 19:09:54.321334 1081367 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 19:09:54.321374 1081367 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 19:09:54.321380 1081367 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 19:09:54.321383 1081367 kubeadm.go:322] 
	I0717 19:09:54.321439 1081367 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0717 19:09:54.321452 1081367 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0717 19:09:54.321457 1081367 kubeadm.go:322] 
	I0717 19:09:54.321535 1081367 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 19:09:54.321548 1081367 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 19:09:54.321567 1081367 kubeadm.go:322] 
	I0717 19:09:54.321646 1081367 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0717 19:09:54.321657 1081367 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 19:09:54.321754 1081367 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 19:09:54.321762 1081367 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 19:09:54.321828 1081367 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 19:09:54.321841 1081367 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 19:09:54.321845 1081367 kubeadm.go:322] 
	I0717 19:09:54.321916 1081367 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0717 19:09:54.321923 1081367 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 19:09:54.321979 1081367 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0717 19:09:54.321985 1081367 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 19:09:54.321989 1081367 kubeadm.go:322] 
	I0717 19:09:54.322058 1081367 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token uqi5bv.w9s0o8txtdaswx46 \
	I0717 19:09:54.322064 1081367 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token uqi5bv.w9s0o8txtdaswx46 \
	I0717 19:09:54.322181 1081367 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a35378d49f60cb3201ada96dd7ea3b95e7cce0b7f53bd002f18d31a55869c8fc \
	I0717 19:09:54.322193 1081367 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a35378d49f60cb3201ada96dd7ea3b95e7cce0b7f53bd002f18d31a55869c8fc \
	I0717 19:09:54.322214 1081367 command_runner.go:130] > 	--control-plane 
	I0717 19:09:54.322222 1081367 kubeadm.go:322] 	--control-plane 
	I0717 19:09:54.322227 1081367 kubeadm.go:322] 
	I0717 19:09:54.322319 1081367 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0717 19:09:54.322327 1081367 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 19:09:54.322331 1081367 kubeadm.go:322] 
	I0717 19:09:54.322393 1081367 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token uqi5bv.w9s0o8txtdaswx46 \
	I0717 19:09:54.322399 1081367 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token uqi5bv.w9s0o8txtdaswx46 \
	I0717 19:09:54.322475 1081367 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a35378d49f60cb3201ada96dd7ea3b95e7cce0b7f53bd002f18d31a55869c8fc 
	I0717 19:09:54.322498 1081367 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a35378d49f60cb3201ada96dd7ea3b95e7cce0b7f53bd002f18d31a55869c8fc 
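The join command printed above embeds a bootstrap token with the 24h TTL configured earlier in the kubeadm config; if it has expired by the time another node joins, an equivalent command can be regenerated on the control plane with:

    sudo kubeadm token create --print-join-command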
	I0717 19:09:54.322526 1081367 cni.go:84] Creating CNI manager for ""
	I0717 19:09:54.322551 1081367 cni.go:137] 1 nodes found, recommending kindnet
	I0717 19:09:54.326229 1081367 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 19:09:54.328412 1081367 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 19:09:54.342500 1081367 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0717 19:09:54.342537 1081367 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0717 19:09:54.342563 1081367 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0717 19:09:54.342572 1081367 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 19:09:54.342580 1081367 command_runner.go:130] > Access: 2023-07-17 19:09:18.114596432 +0000
	I0717 19:09:54.342586 1081367 command_runner.go:130] > Modify: 2023-07-15 02:34:28.000000000 +0000
	I0717 19:09:54.342593 1081367 command_runner.go:130] > Change: 2023-07-17 19:09:16.153596432 +0000
	I0717 19:09:54.342599 1081367 command_runner.go:130] >  Birth: -
	I0717 19:09:54.342681 1081367 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0717 19:09:54.342699 1081367 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 19:09:54.399083 1081367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 19:09:55.606925 1081367 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0717 19:09:55.621651 1081367 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0717 19:09:55.637735 1081367 command_runner.go:130] > serviceaccount/kindnet created
	I0717 19:09:55.657020 1081367 command_runner.go:130] > daemonset.apps/kindnet created
	I0717 19:09:55.659683 1081367 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.260553982s)
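Once the kindnet ClusterRole, ClusterRoleBinding, ServiceAccount and DaemonSet above are created, the rollout can be checked with the same kubeconfig the log uses (sketch):

    sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system rollout status ds/kindnet --timeout=120s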
	I0717 19:09:55.659735 1081367 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 19:09:55.659833 1081367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:09:55.659897 1081367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5 minikube.k8s.io/name=multinode-464644 minikube.k8s.io/updated_at=2023_07_17T19_09_55_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:09:55.683078 1081367 command_runner.go:130] > -16
	I0717 19:09:55.683232 1081367 ops.go:34] apiserver oom_adj: -16
	I0717 19:09:55.843818 1081367 command_runner.go:130] > node/multinode-464644 labeled
	I0717 19:09:55.852282 1081367 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0717 19:09:55.852440 1081367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:09:55.946815 1081367 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:09:56.447810 1081367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:09:56.544578 1081367 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:09:56.947203 1081367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:09:57.045760 1081367 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:09:57.447384 1081367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:09:57.540771 1081367 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:09:57.947469 1081367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:09:58.033743 1081367 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:09:58.448060 1081367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:09:58.545111 1081367 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:09:58.947487 1081367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:09:59.049377 1081367 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:09:59.447803 1081367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:09:59.540760 1081367 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:09:59.948040 1081367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:10:00.041205 1081367 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:10:00.447864 1081367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:10:00.539549 1081367 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:10:00.948033 1081367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:10:01.046017 1081367 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:10:01.447779 1081367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:10:01.541407 1081367 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:10:01.948098 1081367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:10:02.043256 1081367 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:10:02.447974 1081367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:10:02.547029 1081367 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:10:02.947718 1081367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:10:03.039274 1081367 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:10:03.447419 1081367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:10:03.546859 1081367 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:10:03.947387 1081367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:10:04.042264 1081367 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:10:04.447104 1081367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:10:04.555332 1081367 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:10:04.947998 1081367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:10:05.057700 1081367 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:10:05.447306 1081367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:10:05.587814 1081367 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:10:05.947684 1081367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:10:06.041157 1081367 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:10:06.447159 1081367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:10:06.557166 1081367 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:10:06.947380 1081367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:10:07.047030 1081367 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:10:07.447682 1081367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:10:07.590863 1081367 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:10:07.947079 1081367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:10:08.059074 1081367 command_runner.go:130] > NAME      SECRETS   AGE
	I0717 19:10:08.059365 1081367 command_runner.go:130] > default   0         1s
	I0717 19:10:08.061426 1081367 kubeadm.go:1081] duration metric: took 12.401665685s to wait for elevateKubeSystemPrivileges.
	I0717 19:10:08.061468 1081367 kubeadm.go:406] StartCluster complete in 26.500198521s
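The 12.4s elevateKubeSystemPrivileges wait above is simply polling until the "default" ServiceAccount exists; a rough bash equivalent of that retry loop (the 0.5s interval is illustrative, not minikube's actual cadence):

    # block until the default ServiceAccount has been created by the controller manager
    until kubectl --context multinode-464644 get sa default >/dev/null 2>&1; do
      sleep 0.5
    done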
	I0717 19:10:08.061496 1081367 settings.go:142] acquiring lock: {Name:mk3323296c186763db6ebb5a1c5ae94a6b1a6242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:10:08.061609 1081367 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 19:10:08.062372 1081367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/kubeconfig: {Name:mk7f4fe48169c87be980c7edd9dbe55d4ea8b9ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:10:08.062691 1081367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 19:10:08.062819 1081367 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 19:10:08.062922 1081367 addons.go:69] Setting storage-provisioner=true in profile "multinode-464644"
	I0717 19:10:08.062942 1081367 addons.go:231] Setting addon storage-provisioner=true in "multinode-464644"
	I0717 19:10:08.062944 1081367 addons.go:69] Setting default-storageclass=true in profile "multinode-464644"
	I0717 19:10:08.062955 1081367 config.go:182] Loaded profile config "multinode-464644": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:10:08.062969 1081367 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 19:10:08.062975 1081367 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-464644"
	I0717 19:10:08.063008 1081367 host.go:66] Checking if "multinode-464644" exists ...
	I0717 19:10:08.063280 1081367 kapi.go:59] client config for multinode-464644: &rest.Config{Host:"https://192.168.39.174:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/client.key", CAFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 19:10:08.063475 1081367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:10:08.063512 1081367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:10:08.063513 1081367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:10:08.063632 1081367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:10:08.064139 1081367 cert_rotation.go:137] Starting client certificate rotation controller
	I0717 19:10:08.064750 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0717 19:10:08.064770 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:08.064781 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:08.064789 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:08.075093 1081367 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0717 19:10:08.075128 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:08.075139 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:08.075149 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:08.075158 1081367 round_trippers.go:580]     Content-Length: 291
	I0717 19:10:08.075167 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:08 GMT
	I0717 19:10:08.075177 1081367 round_trippers.go:580]     Audit-Id: 8a2a9193-235f-426d-a264-8065fc020328
	I0717 19:10:08.075187 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:08.075199 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:08.075233 1081367 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"06c3326f-def8-45bf-a91d-f07feefe253d","resourceVersion":"259","creationTimestamp":"2023-07-17T19:09:54Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0717 19:10:08.075818 1081367 request.go:1188] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"06c3326f-def8-45bf-a91d-f07feefe253d","resourceVersion":"259","creationTimestamp":"2023-07-17T19:09:54Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0717 19:10:08.075891 1081367 round_trippers.go:463] PUT https://192.168.39.174:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0717 19:10:08.075905 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:08.075917 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:08.075926 1081367 round_trippers.go:473]     Content-Type: application/json
	I0717 19:10:08.075939 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:08.080451 1081367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37493
	I0717 19:10:08.080564 1081367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33191
	I0717 19:10:08.081027 1081367 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:10:08.081083 1081367 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:10:08.081649 1081367 main.go:141] libmachine: Using API Version  1
	I0717 19:10:08.081667 1081367 main.go:141] libmachine: Using API Version  1
	I0717 19:10:08.081676 1081367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:10:08.081687 1081367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:10:08.082059 1081367 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:10:08.082085 1081367 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:10:08.082306 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetState
	I0717 19:10:08.082633 1081367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:10:08.082672 1081367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:10:08.084762 1081367 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 19:10:08.085089 1081367 kapi.go:59] client config for multinode-464644: &rest.Config{Host:"https://192.168.39.174:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/client.key", CAFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 19:10:08.085542 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/apis/storage.k8s.io/v1/storageclasses
	I0717 19:10:08.085575 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:08.085589 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:08.085598 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:08.090256 1081367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 19:10:08.090286 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:08.090298 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:08.090308 1081367 round_trippers.go:580]     Content-Length: 109
	I0717 19:10:08.090316 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:08 GMT
	I0717 19:10:08.090324 1081367 round_trippers.go:580]     Audit-Id: 5d7dd353-40c2-40b8-bd77-e7ef821d087b
	I0717 19:10:08.090333 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:08.090345 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:08.090355 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:08.090389 1081367 request.go:1188] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"348"},"items":[]}
	I0717 19:10:08.090728 1081367 addons.go:231] Setting addon default-storageclass=true in "multinode-464644"
	I0717 19:10:08.090775 1081367 host.go:66] Checking if "multinode-464644" exists ...
	I0717 19:10:08.091192 1081367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:10:08.091227 1081367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:10:08.093595 1081367 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0717 19:10:08.093625 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:08.093637 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:08.093652 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:08.093666 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:08.093677 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:08.093686 1081367 round_trippers.go:580]     Content-Length: 291
	I0717 19:10:08.093702 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:08 GMT
	I0717 19:10:08.093711 1081367 round_trippers.go:580]     Audit-Id: 2ab46849-8b57-4f34-b146-22af92c76590
	I0717 19:10:08.093760 1081367 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"06c3326f-def8-45bf-a91d-f07feefe253d","resourceVersion":"348","creationTimestamp":"2023-07-17T19:09:54Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0717 19:10:08.099154 1081367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40861
	I0717 19:10:08.099684 1081367 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:10:08.100376 1081367 main.go:141] libmachine: Using API Version  1
	I0717 19:10:08.100413 1081367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:10:08.100783 1081367 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:10:08.101045 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetState
	I0717 19:10:08.103095 1081367 main.go:141] libmachine: (multinode-464644) Calling .DriverName
	I0717 19:10:08.106402 1081367 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:10:08.107952 1081367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41275
	I0717 19:10:08.108984 1081367 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:10:08.109016 1081367 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 19:10:08.109048 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHHostname
	I0717 19:10:08.109497 1081367 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:10:08.110066 1081367 main.go:141] libmachine: Using API Version  1
	I0717 19:10:08.110090 1081367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:10:08.110496 1081367 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:10:08.111137 1081367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:10:08.111194 1081367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:10:08.112441 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:10:08.112846 1081367 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:09:20 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:10:08.112873 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:10:08.113018 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHPort
	I0717 19:10:08.113214 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHKeyPath
	I0717 19:10:08.113398 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHUsername
	I0717 19:10:08.113588 1081367 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644/id_rsa Username:docker}
	I0717 19:10:08.127396 1081367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41455
	I0717 19:10:08.127917 1081367 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:10:08.128485 1081367 main.go:141] libmachine: Using API Version  1
	I0717 19:10:08.128520 1081367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:10:08.128955 1081367 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:10:08.129195 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetState
	I0717 19:10:08.130881 1081367 main.go:141] libmachine: (multinode-464644) Calling .DriverName
	I0717 19:10:08.131204 1081367 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 19:10:08.131224 1081367 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 19:10:08.131250 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHHostname
	I0717 19:10:08.134367 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:10:08.134924 1081367 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:09:20 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:10:08.134957 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:10:08.135124 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHPort
	I0717 19:10:08.135325 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHKeyPath
	I0717 19:10:08.135512 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHUsername
	I0717 19:10:08.135703 1081367 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644/id_rsa Username:docker}
	I0717 19:10:08.235210 1081367 command_runner.go:130] > apiVersion: v1
	I0717 19:10:08.235237 1081367 command_runner.go:130] > data:
	I0717 19:10:08.235241 1081367 command_runner.go:130] >   Corefile: |
	I0717 19:10:08.235245 1081367 command_runner.go:130] >     .:53 {
	I0717 19:10:08.235249 1081367 command_runner.go:130] >         errors
	I0717 19:10:08.235254 1081367 command_runner.go:130] >         health {
	I0717 19:10:08.235258 1081367 command_runner.go:130] >            lameduck 5s
	I0717 19:10:08.235262 1081367 command_runner.go:130] >         }
	I0717 19:10:08.235265 1081367 command_runner.go:130] >         ready
	I0717 19:10:08.235272 1081367 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0717 19:10:08.235276 1081367 command_runner.go:130] >            pods insecure
	I0717 19:10:08.235281 1081367 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0717 19:10:08.235286 1081367 command_runner.go:130] >            ttl 30
	I0717 19:10:08.235290 1081367 command_runner.go:130] >         }
	I0717 19:10:08.235294 1081367 command_runner.go:130] >         prometheus :9153
	I0717 19:10:08.235298 1081367 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0717 19:10:08.235303 1081367 command_runner.go:130] >            max_concurrent 1000
	I0717 19:10:08.235306 1081367 command_runner.go:130] >         }
	I0717 19:10:08.235318 1081367 command_runner.go:130] >         cache 30
	I0717 19:10:08.235323 1081367 command_runner.go:130] >         loop
	I0717 19:10:08.235329 1081367 command_runner.go:130] >         reload
	I0717 19:10:08.235334 1081367 command_runner.go:130] >         loadbalance
	I0717 19:10:08.235339 1081367 command_runner.go:130] >     }
	I0717 19:10:08.235346 1081367 command_runner.go:130] > kind: ConfigMap
	I0717 19:10:08.235355 1081367 command_runner.go:130] > metadata:
	I0717 19:10:08.235364 1081367 command_runner.go:130] >   creationTimestamp: "2023-07-17T19:09:54Z"
	I0717 19:10:08.235371 1081367 command_runner.go:130] >   name: coredns
	I0717 19:10:08.235377 1081367 command_runner.go:130] >   namespace: kube-system
	I0717 19:10:08.235385 1081367 command_runner.go:130] >   resourceVersion: "255"
	I0717 19:10:08.235392 1081367 command_runner.go:130] >   uid: 13425687-4297-46fd-ae23-038f5de0a562
	I0717 19:10:08.244404 1081367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 19:10:08.283622 1081367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:10:08.310196 1081367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 19:10:08.594299 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0717 19:10:08.594325 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:08.594334 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:08.594341 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:08.731038 1081367 round_trippers.go:574] Response Status: 200 OK in 136 milliseconds
	I0717 19:10:08.731064 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:08.731072 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:08.731077 1081367 round_trippers.go:580]     Content-Length: 291
	I0717 19:10:08.731083 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:08 GMT
	I0717 19:10:08.731088 1081367 round_trippers.go:580]     Audit-Id: 7f370f0a-fe9c-48ab-a85d-36daa312f311
	I0717 19:10:08.731094 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:08.731101 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:08.731109 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:08.731143 1081367 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"06c3326f-def8-45bf-a91d-f07feefe253d","resourceVersion":"371","creationTimestamp":"2023-07-17T19:09:54Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0717 19:10:08.731268 1081367 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-464644" context rescaled to 1 replicas
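The rescale above is done through the autoscaling/v1 Scale subresource (a GET followed by a PUT with spec.replicas changed from 2 to 1); from the host it is roughly equivalent to:

    # scale the CoreDNS deployment down to a single replica
    kubectl --context multinode-464644 -n kube-system scale deployment coredns --replicas=1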
	I0717 19:10:08.731304 1081367 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:10:08.735216 1081367 out.go:177] * Verifying Kubernetes components...
	I0717 19:10:08.737752 1081367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:10:09.369967 1081367 command_runner.go:130] > configmap/coredns replaced
	I0717 19:10:09.372637 1081367 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.128188777s)
	I0717 19:10:09.372678 1081367 start.go:917] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
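The sed pipeline above splices a hosts block (192.168.39.1 host.minikube.internal, with fallthrough) ahead of the forward plugin and a log directive ahead of errors; one way to confirm the record landed, assuming the ConfigMap key is Corefile as shown in the dump above:

    # print the patched Corefile and show the injected hosts block
    kubectl --context multinode-464644 -n kube-system get configmap coredns \
      -o jsonpath='{.data.Corefile}' | grep -A 3 'hosts {'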
	I0717 19:10:09.560056 1081367 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0717 19:10:09.577038 1081367 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0717 19:10:09.592701 1081367 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0717 19:10:09.602450 1081367 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0717 19:10:09.613479 1081367 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0717 19:10:09.633777 1081367 command_runner.go:130] > pod/storage-provisioner created
	I0717 19:10:09.636430 1081367 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0717 19:10:09.636429 1081367 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.352763325s)
	I0717 19:10:09.636476 1081367 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.32625253s)
	I0717 19:10:09.636509 1081367 main.go:141] libmachine: Making call to close driver server
	I0717 19:10:09.636517 1081367 main.go:141] libmachine: Making call to close driver server
	I0717 19:10:09.636529 1081367 main.go:141] libmachine: (multinode-464644) Calling .Close
	I0717 19:10:09.636529 1081367 main.go:141] libmachine: (multinode-464644) Calling .Close
	I0717 19:10:09.636894 1081367 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:10:09.636911 1081367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:10:09.636920 1081367 main.go:141] libmachine: Making call to close driver server
	I0717 19:10:09.636924 1081367 main.go:141] libmachine: (multinode-464644) DBG | Closing plugin on server side
	I0717 19:10:09.636929 1081367 main.go:141] libmachine: (multinode-464644) Calling .Close
	I0717 19:10:09.636941 1081367 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:10:09.636949 1081367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:10:09.637011 1081367 main.go:141] libmachine: (multinode-464644) DBG | Closing plugin on server side
	I0717 19:10:09.637071 1081367 main.go:141] libmachine: Making call to close driver server
	I0717 19:10:09.637091 1081367 main.go:141] libmachine: (multinode-464644) Calling .Close
	I0717 19:10:09.637157 1081367 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 19:10:09.637205 1081367 main.go:141] libmachine: (multinode-464644) DBG | Closing plugin on server side
	I0717 19:10:09.637175 1081367 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:10:09.637276 1081367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:10:09.637451 1081367 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:10:09.637462 1081367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:10:09.637474 1081367 main.go:141] libmachine: Making call to close driver server
	I0717 19:10:09.637490 1081367 main.go:141] libmachine: (multinode-464644) Calling .Close
	I0717 19:10:09.637506 1081367 kapi.go:59] client config for multinode-464644: &rest.Config{Host:"https://192.168.39.174:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/client.key", CAFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 19:10:09.637832 1081367 main.go:141] libmachine: (multinode-464644) DBG | Closing plugin on server side
	I0717 19:10:09.637896 1081367 node_ready.go:35] waiting up to 6m0s for node "multinode-464644" to be "Ready" ...
	I0717 19:10:09.637944 1081367 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:10:09.638901 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:10:09.638909 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:09.638923 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:09.638932 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:09.641511 1081367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:10:09.644556 1081367 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0717 19:10:09.646536 1081367 addons.go:502] enable addons completed in 1.583710394s: enabled=[storage-provisioner default-storageclass]
	I0717 19:10:09.648003 1081367 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0717 19:10:09.648023 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:09.648031 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:09.648036 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:09.648042 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:09.648047 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:09 GMT
	I0717 19:10:09.648052 1081367 round_trippers.go:580]     Audit-Id: c5003032-81e6-4766-babd-4d22a5a485e2
	I0717 19:10:09.648057 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:09.648182 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"347","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0717 19:10:10.149689 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:10:10.149713 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:10.149723 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:10.149729 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:10.152804 1081367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:10:10.152831 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:10.152839 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:10.152845 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:10.152851 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:10.152856 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:10.152861 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:10 GMT
	I0717 19:10:10.152867 1081367 round_trippers.go:580]     Audit-Id: c56f08ba-550e-456d-81d3-7a61ade83508
	I0717 19:10:10.152978 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"347","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0717 19:10:10.649674 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:10:10.649701 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:10.649710 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:10.649715 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:10.653173 1081367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:10:10.653206 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:10.653214 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:10.653227 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:10 GMT
	I0717 19:10:10.653239 1081367 round_trippers.go:580]     Audit-Id: 16a8e47b-142a-44ca-8a91-0701aca488db
	I0717 19:10:10.653247 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:10.653256 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:10.653265 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:10.653393 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"347","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0717 19:10:11.149996 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:10:11.150020 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:11.150029 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:11.150035 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:11.153056 1081367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:10:11.153079 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:11.153086 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:11 GMT
	I0717 19:10:11.153093 1081367 round_trippers.go:580]     Audit-Id: cf7e94fe-314c-47d1-a004-824c67cd2320
	I0717 19:10:11.153101 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:11.153110 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:11.153118 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:11.153127 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:11.153274 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"347","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0717 19:10:11.649891 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:10:11.649925 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:11.649935 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:11.649942 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:11.655072 1081367 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 19:10:11.655101 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:11.655109 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:11.655114 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:11.655120 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:11.655131 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:11 GMT
	I0717 19:10:11.655136 1081367 round_trippers.go:580]     Audit-Id: 5b7fc33a-121d-4a09-b6e0-228d7e0b9c33
	I0717 19:10:11.655142 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:11.655267 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"347","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0717 19:10:11.655641 1081367 node_ready.go:58] node "multinode-464644" has status "Ready":"False"
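The loop above polls GET /api/v1/nodes/multinode-464644 roughly every 500ms until the Ready condition turns True; a one-line stand-in for the same 6m wait:

    # block (up to 6 minutes) until the node reports condition Ready=True
    kubectl --context multinode-464644 wait --for=condition=Ready node/multinode-464644 --timeout=6m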
	I0717 19:10:12.149936 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:10:12.149968 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:12.149976 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:12.149982 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:12.154179 1081367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 19:10:12.154217 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:12.154233 1081367 round_trippers.go:580]     Audit-Id: fe6d36d5-6637-4641-b8ca-2f6c41841bce
	I0717 19:10:12.154241 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:12.154247 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:12.154253 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:12.154259 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:12.154264 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:12 GMT
	I0717 19:10:12.154412 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"347","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0717 19:10:12.650026 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:10:12.650056 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:12.650065 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:12.650071 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:12.653587 1081367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:10:12.653619 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:12.653628 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:12.653634 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:12.653642 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:12 GMT
	I0717 19:10:12.653648 1081367 round_trippers.go:580]     Audit-Id: a859c252-3874-44dd-a3bb-f4bb26c0c92e
	I0717 19:10:12.653653 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:12.653659 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:12.653763 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"347","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0717 19:10:13.149035 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:10:13.149069 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:13.149078 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:13.149085 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:13.152865 1081367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:10:13.152893 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:13.152901 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:13.152906 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:13.152912 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:13.152918 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:13.152926 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:13 GMT
	I0717 19:10:13.152931 1081367 round_trippers.go:580]     Audit-Id: c6eba607-5f2f-45a6-bfaa-235f182752eb
	I0717 19:10:13.153113 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"347","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0717 19:10:13.649720 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:10:13.649749 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:13.649758 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:13.649764 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:13.653267 1081367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:10:13.653299 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:13.653307 1081367 round_trippers.go:580]     Audit-Id: 7a9bbb73-b85c-44ea-b1c7-441e58bc447c
	I0717 19:10:13.653315 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:13.653321 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:13.653327 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:13.653332 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:13.653337 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:13 GMT
	I0717 19:10:13.653718 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"347","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0717 19:10:14.149374 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:10:14.149405 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:14.149417 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:14.149425 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:14.159971 1081367 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0717 19:10:14.160010 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:14.160018 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:14 GMT
	I0717 19:10:14.160025 1081367 round_trippers.go:580]     Audit-Id: aef9c835-996d-40a8-8348-0dd5746dceed
	I0717 19:10:14.160034 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:14.160043 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:14.160051 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:14.160059 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:14.160916 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"423","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0717 19:10:14.161265 1081367 node_ready.go:49] node "multinode-464644" has status "Ready":"True"
	I0717 19:10:14.161280 1081367 node_ready.go:38] duration metric: took 4.523363331s waiting for node "multinode-464644" to be "Ready" ...
	I0717 19:10:14.161289 1081367 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:10:14.161372 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods
	I0717 19:10:14.161380 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:14.161387 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:14.161393 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:14.167031 1081367 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 19:10:14.167067 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:14.167077 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:14.167086 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:14.167093 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:14.167102 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:14.167110 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:14 GMT
	I0717 19:10:14.167122 1081367 round_trippers.go:580]     Audit-Id: fcb27c40-f96f-4e86-b3fe-78d4202408f3
	I0717 19:10:14.167731 1081367 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"429"},"items":[{"metadata":{"name":"coredns-5d78c9869d-wqj4s","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991","resourceVersion":"429","creationTimestamp":"2023-07-17T19:10:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"8369cc45-03bf-4784-a3b1-d46615923fd9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8369cc45-03bf-4784-a3b1-d46615923fd9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54819 chars]
	I0717 19:10:14.170943 1081367 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-wqj4s" in "kube-system" namespace to be "Ready" ...
	I0717 19:10:14.171035 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-wqj4s
	I0717 19:10:14.171047 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:14.171055 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:14.171061 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:14.174048 1081367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:10:14.174070 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:14.174082 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:14.174091 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:14 GMT
	I0717 19:10:14.174100 1081367 round_trippers.go:580]     Audit-Id: 7616b4a3-78e8-41ce-baff-e72044664161
	I0717 19:10:14.174109 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:14.174118 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:14.174125 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:14.174268 1081367 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-wqj4s","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991","resourceVersion":"429","creationTimestamp":"2023-07-17T19:10:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"8369cc45-03bf-4784-a3b1-d46615923fd9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8369cc45-03bf-4784-a3b1-d46615923fd9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0717 19:10:14.174719 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:10:14.174732 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:14.174740 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:14.174746 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:14.177331 1081367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:10:14.177348 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:14.177355 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:14.177360 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:14.177366 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:14.177373 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:14.177382 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:14 GMT
	I0717 19:10:14.177391 1081367 round_trippers.go:580]     Audit-Id: a59f3943-30b6-4785-a88b-4eab1c06aa62
	I0717 19:10:14.177636 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"423","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0717 19:10:14.678513 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-wqj4s
	I0717 19:10:14.678541 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:14.678550 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:14.678557 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:14.682258 1081367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:10:14.682286 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:14.682294 1081367 round_trippers.go:580]     Audit-Id: 024d089d-f3a0-430a-8550-858b0f65804d
	I0717 19:10:14.682300 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:14.682306 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:14.682313 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:14.682319 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:14.682325 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:14 GMT
	I0717 19:10:14.682431 1081367 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-wqj4s","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991","resourceVersion":"429","creationTimestamp":"2023-07-17T19:10:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"8369cc45-03bf-4784-a3b1-d46615923fd9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8369cc45-03bf-4784-a3b1-d46615923fd9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0717 19:10:14.682903 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:10:14.682916 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:14.682923 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:14.682929 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:14.686410 1081367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:10:14.686437 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:14.686450 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:14 GMT
	I0717 19:10:14.686459 1081367 round_trippers.go:580]     Audit-Id: 457021be-64bb-4606-8e01-368a6eee5fc2
	I0717 19:10:14.686466 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:14.686473 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:14.686481 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:14.686488 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:14.686821 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"423","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0717 19:10:15.178402 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-wqj4s
	I0717 19:10:15.178433 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:15.178444 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:15.178450 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:15.181707 1081367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:10:15.181738 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:15.181746 1081367 round_trippers.go:580]     Audit-Id: e3495441-301b-4d90-af34-10e0836c9ca6
	I0717 19:10:15.181752 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:15.181757 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:15.181762 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:15.181768 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:15.181776 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:15 GMT
	I0717 19:10:15.182020 1081367 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-wqj4s","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991","resourceVersion":"429","creationTimestamp":"2023-07-17T19:10:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"8369cc45-03bf-4784-a3b1-d46615923fd9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8369cc45-03bf-4784-a3b1-d46615923fd9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0717 19:10:15.182719 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:10:15.182743 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:15.182755 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:15.182764 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:15.188626 1081367 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 19:10:15.188656 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:15.188671 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:15.188677 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:15 GMT
	I0717 19:10:15.188685 1081367 round_trippers.go:580]     Audit-Id: f7bb2eca-4d2b-41bb-97f8-33f2d262c284
	I0717 19:10:15.188693 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:15.188701 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:15.188709 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:15.188874 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"423","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0717 19:10:15.678192 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-wqj4s
	I0717 19:10:15.678224 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:15.678235 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:15.678244 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:15.681468 1081367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:10:15.681493 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:15.681500 1081367 round_trippers.go:580]     Audit-Id: ded181bc-1a61-4206-98fa-36d54455748a
	I0717 19:10:15.681506 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:15.681512 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:15.681520 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:15.681527 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:15.681534 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:15 GMT
	I0717 19:10:15.681931 1081367 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-wqj4s","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991","resourceVersion":"444","creationTimestamp":"2023-07-17T19:10:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"8369cc45-03bf-4784-a3b1-d46615923fd9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8369cc45-03bf-4784-a3b1-d46615923fd9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6494 chars]
	I0717 19:10:15.682579 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:10:15.682596 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:15.682608 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:15.682618 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:15.689054 1081367 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 19:10:15.689084 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:15.689094 1081367 round_trippers.go:580]     Audit-Id: adf169c4-5bad-46e0-9716-68f205d32fd7
	I0717 19:10:15.689103 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:15.689110 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:15.689117 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:15.689127 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:15.689139 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:15 GMT
	I0717 19:10:15.689293 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"423","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0717 19:10:16.178965 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-wqj4s
	I0717 19:10:16.178992 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:16.179001 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:16.179007 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:16.182683 1081367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:10:16.182717 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:16.182728 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:16.182734 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:16.182740 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:16.182745 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:16.182754 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:16 GMT
	I0717 19:10:16.182762 1081367 round_trippers.go:580]     Audit-Id: 706ee84b-43f0-4691-b6ae-2a16ce097b5b
	I0717 19:10:16.182902 1081367 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-wqj4s","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991","resourceVersion":"444","creationTimestamp":"2023-07-17T19:10:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"8369cc45-03bf-4784-a3b1-d46615923fd9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8369cc45-03bf-4784-a3b1-d46615923fd9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6494 chars]
	I0717 19:10:16.183422 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:10:16.183437 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:16.183444 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:16.183451 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:16.186003 1081367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:10:16.186027 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:16.186037 1081367 round_trippers.go:580]     Audit-Id: 8ba25f50-b2eb-4617-879b-0f68f6d6daae
	I0717 19:10:16.186047 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:16.186055 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:16.186061 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:16.186067 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:16.186072 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:16 GMT
	I0717 19:10:16.186264 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"423","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0717 19:10:16.186578 1081367 pod_ready.go:102] pod "coredns-5d78c9869d-wqj4s" in "kube-system" namespace has status "Ready":"False"
	I0717 19:10:16.679044 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-wqj4s
	I0717 19:10:16.679075 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:16.679084 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:16.679090 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:16.682258 1081367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:10:16.682283 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:16.682291 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:16 GMT
	I0717 19:10:16.682296 1081367 round_trippers.go:580]     Audit-Id: c7463b40-afa7-455f-b2b7-9116f09cd752
	I0717 19:10:16.682302 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:16.682307 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:16.682313 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:16.682318 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:16.683077 1081367 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-wqj4s","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991","resourceVersion":"448","creationTimestamp":"2023-07-17T19:10:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"8369cc45-03bf-4784-a3b1-d46615923fd9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8369cc45-03bf-4784-a3b1-d46615923fd9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0717 19:10:16.683580 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:10:16.683592 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:16.683600 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:16.683606 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:16.686947 1081367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:10:16.686975 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:16.686983 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:16.686988 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:16.686994 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:16.687004 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:16.687013 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:16 GMT
	I0717 19:10:16.687023 1081367 round_trippers.go:580]     Audit-Id: 079eb51a-2685-4051-afdf-e0e34d12083e
	I0717 19:10:16.687138 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"423","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0717 19:10:16.687475 1081367 pod_ready.go:92] pod "coredns-5d78c9869d-wqj4s" in "kube-system" namespace has status "Ready":"True"
	I0717 19:10:16.687490 1081367 pod_ready.go:81] duration metric: took 2.516519938s waiting for pod "coredns-5d78c9869d-wqj4s" in "kube-system" namespace to be "Ready" ...
	I0717 19:10:16.687499 1081367 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:10:16.687557 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-464644
	I0717 19:10:16.687565 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:16.687572 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:16.687578 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:16.690444 1081367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:10:16.690467 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:16.690478 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:16.690485 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:16 GMT
	I0717 19:10:16.690494 1081367 round_trippers.go:580]     Audit-Id: fc45e6a8-bb74-4e8b-8057-08588bf95e5b
	I0717 19:10:16.690501 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:16.690511 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:16.690520 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:16.690974 1081367 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-464644","namespace":"kube-system","uid":"b672d395-d32d-4198-b486-d9cff48d8b9a","resourceVersion":"433","creationTimestamp":"2023-07-17T19:09:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.174:2379","kubernetes.io/config.hash":"d5b599b6912e0d4b30d78bf7b7e52672","kubernetes.io/config.mirror":"d5b599b6912e0d4b30d78bf7b7e52672","kubernetes.io/config.seen":"2023-07-17T19:09:54.339578401Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:09:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0717 19:10:16.691366 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:10:16.691377 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:16.691384 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:16.691390 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:16.694619 1081367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:10:16.694639 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:16.694645 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:16.694652 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:16.694658 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:16.694663 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:16 GMT
	I0717 19:10:16.694669 1081367 round_trippers.go:580]     Audit-Id: 0e5a6b93-0024-4ff4-86d8-4bb18d82f667
	I0717 19:10:16.694674 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:16.694813 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"423","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0717 19:10:16.695131 1081367 pod_ready.go:92] pod "etcd-multinode-464644" in "kube-system" namespace has status "Ready":"True"
	I0717 19:10:16.695143 1081367 pod_ready.go:81] duration metric: took 7.637917ms waiting for pod "etcd-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:10:16.695154 1081367 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:10:16.695221 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-464644
	I0717 19:10:16.695229 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:16.695236 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:16.695242 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:16.697346 1081367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:10:16.697363 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:16.697369 1081367 round_trippers.go:580]     Audit-Id: 04387400-fdfe-4b73-9dab-4d960b5488ae
	I0717 19:10:16.697374 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:16.697380 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:16.697385 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:16.697391 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:16.697396 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:16 GMT
	I0717 19:10:16.697786 1081367 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-464644","namespace":"kube-system","uid":"dd6e14e2-0b92-42b9-b6a2-1562c2c70903","resourceVersion":"432","creationTimestamp":"2023-07-17T19:09:54Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.174:8443","kubernetes.io/config.hash":"b280034e13df00701aec7afc575fcc6c","kubernetes.io/config.mirror":"b280034e13df00701aec7afc575fcc6c","kubernetes.io/config.seen":"2023-07-17T19:09:54.339586957Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:09:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0717 19:10:16.698197 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:10:16.698208 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:16.698215 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:16.698221 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:16.700320 1081367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:10:16.700340 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:16.700348 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:16.700354 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:16 GMT
	I0717 19:10:16.700359 1081367 round_trippers.go:580]     Audit-Id: 1d86b7e5-43fd-4ab6-b9e4-3ff31d1c5722
	I0717 19:10:16.700364 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:16.700369 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:16.700374 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:16.700541 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"423","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0717 19:10:16.700874 1081367 pod_ready.go:92] pod "kube-apiserver-multinode-464644" in "kube-system" namespace has status "Ready":"True"
	I0717 19:10:16.700891 1081367 pod_ready.go:81] duration metric: took 5.72892ms waiting for pod "kube-apiserver-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:10:16.700899 1081367 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:10:16.700947 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-464644
	I0717 19:10:16.700956 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:16.700962 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:16.700968 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:16.703443 1081367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:10:16.703465 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:16.703475 1081367 round_trippers.go:580]     Audit-Id: 5b5217d5-ea85-46f9-9e68-4d1a3e7df5a6
	I0717 19:10:16.703483 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:16.703490 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:16.703498 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:16.703507 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:16.703515 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:16 GMT
	I0717 19:10:16.703694 1081367 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-464644","namespace":"kube-system","uid":"6b598e8b-6c96-4014-b0a2-de37f107a0e9","resourceVersion":"430","creationTimestamp":"2023-07-17T19:09:54Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"323b8f41b30f0969feab8ff61a3ecabd","kubernetes.io/config.mirror":"323b8f41b30f0969feab8ff61a3ecabd","kubernetes.io/config.seen":"2023-07-17T19:09:54.339588566Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:09:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0717 19:10:16.704207 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:10:16.704222 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:16.704231 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:16.704241 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:16.707017 1081367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:10:16.707035 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:16.707042 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:16.707048 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:16 GMT
	I0717 19:10:16.707053 1081367 round_trippers.go:580]     Audit-Id: b47d5b83-c5d1-420a-bf2c-fe23b10da817
	I0717 19:10:16.707058 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:16.707063 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:16.707068 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:16.707212 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"423","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0717 19:10:16.707627 1081367 pod_ready.go:92] pod "kube-controller-manager-multinode-464644" in "kube-system" namespace has status "Ready":"True"
	I0717 19:10:16.707648 1081367 pod_ready.go:81] duration metric: took 6.740161ms waiting for pod "kube-controller-manager-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:10:16.707663 1081367 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qwsn5" in "kube-system" namespace to be "Ready" ...
	I0717 19:10:16.707728 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qwsn5
	I0717 19:10:16.707739 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:16.707749 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:16.707756 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:16.710132 1081367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:10:16.710151 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:16.710158 1081367 round_trippers.go:580]     Audit-Id: 35d657d2-76c5-4a63-a087-0882f29dc000
	I0717 19:10:16.710163 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:16.710168 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:16.710174 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:16.710179 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:16.710184 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:16 GMT
	I0717 19:10:16.710323 1081367 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qwsn5","generateName":"kube-proxy-","namespace":"kube-system","uid":"50e3f5e0-00d9-4412-b4de-649bc29733e9","resourceVersion":"412","creationTimestamp":"2023-07-17T19:10:08Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cab15633-0bd8-4e4c-a88b-c03af1462254","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cab15633-0bd8-4e4c-a88b-c03af1462254\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0717 19:10:16.710879 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:10:16.710899 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:16.710912 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:16.710921 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:16.713370 1081367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:10:16.713390 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:16.713397 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:16 GMT
	I0717 19:10:16.713405 1081367 round_trippers.go:580]     Audit-Id: 2cf06ce1-6dd8-4b88-9fba-69719cd31aeb
	I0717 19:10:16.713410 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:16.713415 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:16.713420 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:16.713426 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:16.713713 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"423","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0717 19:10:16.714055 1081367 pod_ready.go:92] pod "kube-proxy-qwsn5" in "kube-system" namespace has status "Ready":"True"
	I0717 19:10:16.714070 1081367 pod_ready.go:81] duration metric: took 6.401238ms waiting for pod "kube-proxy-qwsn5" in "kube-system" namespace to be "Ready" ...
	I0717 19:10:16.714080 1081367 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:10:16.879584 1081367 request.go:628] Waited for 165.402594ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-464644
	I0717 19:10:16.879655 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-464644
	I0717 19:10:16.879660 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:16.879668 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:16.879675 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:16.883088 1081367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:10:16.883122 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:16.883131 1081367 round_trippers.go:580]     Audit-Id: a8b685b5-a793-4744-9faf-93b46b0590d8
	I0717 19:10:16.883139 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:16.883148 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:16.883156 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:16.883164 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:16.883172 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:16 GMT
	I0717 19:10:16.883317 1081367 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-464644","namespace":"kube-system","uid":"04e5660d-abb0-432a-861e-c5c242edfb98","resourceVersion":"431","creationTimestamp":"2023-07-17T19:09:54Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6435a6b37c43f83175753c4199c85407","kubernetes.io/config.mirror":"6435a6b37c43f83175753c4199c85407","kubernetes.io/config.seen":"2023-07-17T19:09:54.339590320Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:09:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0717 19:10:17.079170 1081367 request.go:628] Waited for 195.36326ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:10:17.079252 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:10:17.079259 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:17.079270 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:17.079276 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:17.082860 1081367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:10:17.082888 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:17.082896 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:17.082901 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:17 GMT
	I0717 19:10:17.082907 1081367 round_trippers.go:580]     Audit-Id: c00e7e69-0401-4798-b235-ab02fa8289e6
	I0717 19:10:17.082912 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:17.082918 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:17.082923 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:17.083116 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"423","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0717 19:10:17.083475 1081367 pod_ready.go:92] pod "kube-scheduler-multinode-464644" in "kube-system" namespace has status "Ready":"True"
	I0717 19:10:17.083491 1081367 pod_ready.go:81] duration metric: took 369.404918ms waiting for pod "kube-scheduler-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:10:17.083502 1081367 pod_ready.go:38] duration metric: took 2.922189233s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:10:17.083521 1081367 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:10:17.083580 1081367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:10:17.101266 1081367 command_runner.go:130] > 1090
	I0717 19:10:17.101322 1081367 api_server.go:72] duration metric: took 8.369981508s to wait for apiserver process to appear ...
	I0717 19:10:17.101335 1081367 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:10:17.101360 1081367 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8443/healthz ...
	I0717 19:10:17.110245 1081367 api_server.go:279] https://192.168.39.174:8443/healthz returned 200:
	ok
	I0717 19:10:17.110327 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/version
	I0717 19:10:17.110336 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:17.110350 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:17.110358 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:17.112356 1081367 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 19:10:17.112381 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:17.112392 1081367 round_trippers.go:580]     Audit-Id: 8644e4b7-e5fc-4782-b953-640e2b4e3974
	I0717 19:10:17.112402 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:17.112412 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:17.112420 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:17.112428 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:17.112434 1081367 round_trippers.go:580]     Content-Length: 263
	I0717 19:10:17.112439 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:17 GMT
	I0717 19:10:17.112486 1081367 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.3",
	  "gitCommit": "25b4e43193bcda6c7328a6d147b1fb73a33f1598",
	  "gitTreeState": "clean",
	  "buildDate": "2023-06-14T09:47:40Z",
	  "goVersion": "go1.20.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0717 19:10:17.112607 1081367 api_server.go:141] control plane version: v1.27.3
	I0717 19:10:17.112629 1081367 api_server.go:131] duration metric: took 11.286991ms to wait for apiserver health ...
	I0717 19:10:17.112639 1081367 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:10:17.280168 1081367 request.go:628] Waited for 167.432304ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods
	I0717 19:10:17.280246 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods
	I0717 19:10:17.280252 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:17.280262 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:17.280272 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:17.287052 1081367 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 19:10:17.287084 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:17.287094 1081367 round_trippers.go:580]     Audit-Id: 78b83625-2e8c-44c4-8c3a-6796a70b1b1f
	I0717 19:10:17.287102 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:17.287110 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:17.287118 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:17.287125 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:17.287133 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:17 GMT
	I0717 19:10:17.289282 1081367 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"453"},"items":[{"metadata":{"name":"coredns-5d78c9869d-wqj4s","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991","resourceVersion":"448","creationTimestamp":"2023-07-17T19:10:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"8369cc45-03bf-4784-a3b1-d46615923fd9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8369cc45-03bf-4784-a3b1-d46615923fd9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53997 chars]
	I0717 19:10:17.291841 1081367 system_pods.go:59] 8 kube-system pods found
	I0717 19:10:17.291876 1081367 system_pods.go:61] "coredns-5d78c9869d-wqj4s" [a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991] Running
	I0717 19:10:17.291885 1081367 system_pods.go:61] "etcd-multinode-464644" [b672d395-d32d-4198-b486-d9cff48d8b9a] Running
	I0717 19:10:17.291892 1081367 system_pods.go:61] "kindnet-2tp5c" [4e4881b0-4a20-4588-a87b-d2ba9c9b6939] Running
	I0717 19:10:17.291899 1081367 system_pods.go:61] "kube-apiserver-multinode-464644" [dd6e14e2-0b92-42b9-b6a2-1562c2c70903] Running
	I0717 19:10:17.291907 1081367 system_pods.go:61] "kube-controller-manager-multinode-464644" [6b598e8b-6c96-4014-b0a2-de37f107a0e9] Running
	I0717 19:10:17.291913 1081367 system_pods.go:61] "kube-proxy-qwsn5" [50e3f5e0-00d9-4412-b4de-649bc29733e9] Running
	I0717 19:10:17.291921 1081367 system_pods.go:61] "kube-scheduler-multinode-464644" [04e5660d-abb0-432a-861e-c5c242edfb98] Running
	I0717 19:10:17.291927 1081367 system_pods.go:61] "storage-provisioner" [bd46cf29-49d3-4c0a-908e-a323a525d8d5] Running
	I0717 19:10:17.291934 1081367 system_pods.go:74] duration metric: took 179.289265ms to wait for pod list to return data ...
	I0717 19:10:17.291944 1081367 default_sa.go:34] waiting for default service account to be created ...
	I0717 19:10:17.479496 1081367 request.go:628] Waited for 187.455706ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/default/serviceaccounts
	I0717 19:10:17.479583 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/default/serviceaccounts
	I0717 19:10:17.479588 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:17.479596 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:17.479602 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:17.483137 1081367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:10:17.483164 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:17.483173 1081367 round_trippers.go:580]     Content-Length: 261
	I0717 19:10:17.483183 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:17 GMT
	I0717 19:10:17.483193 1081367 round_trippers.go:580]     Audit-Id: c32d03eb-70f0-4ed6-a410-73d099ef043b
	I0717 19:10:17.483202 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:17.483211 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:17.483218 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:17.483223 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:17.483246 1081367 request.go:1188] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"453"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"c937d5c3-8099-4596-ba93-f29feec4671e","resourceVersion":"341","creationTimestamp":"2023-07-17T19:10:07Z"}}]}
	I0717 19:10:17.483545 1081367 default_sa.go:45] found service account: "default"
	I0717 19:10:17.483567 1081367 default_sa.go:55] duration metric: took 191.616515ms for default service account to be created ...
	I0717 19:10:17.483578 1081367 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 19:10:17.680136 1081367 request.go:628] Waited for 196.472401ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods
	I0717 19:10:17.680213 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods
	I0717 19:10:17.680217 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:17.680226 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:17.680232 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:17.685630 1081367 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 19:10:17.685664 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:17.685676 1081367 round_trippers.go:580]     Audit-Id: 2b83cac3-bc40-4f4f-b2a0-94ab492d8729
	I0717 19:10:17.685685 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:17.685699 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:17.685704 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:17.685709 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:17.685715 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:17 GMT
	I0717 19:10:17.686479 1081367 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"453"},"items":[{"metadata":{"name":"coredns-5d78c9869d-wqj4s","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991","resourceVersion":"448","creationTimestamp":"2023-07-17T19:10:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"8369cc45-03bf-4784-a3b1-d46615923fd9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8369cc45-03bf-4784-a3b1-d46615923fd9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53997 chars]
	I0717 19:10:17.688202 1081367 system_pods.go:86] 8 kube-system pods found
	I0717 19:10:17.688242 1081367 system_pods.go:89] "coredns-5d78c9869d-wqj4s" [a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991] Running
	I0717 19:10:17.688247 1081367 system_pods.go:89] "etcd-multinode-464644" [b672d395-d32d-4198-b486-d9cff48d8b9a] Running
	I0717 19:10:17.688251 1081367 system_pods.go:89] "kindnet-2tp5c" [4e4881b0-4a20-4588-a87b-d2ba9c9b6939] Running
	I0717 19:10:17.688256 1081367 system_pods.go:89] "kube-apiserver-multinode-464644" [dd6e14e2-0b92-42b9-b6a2-1562c2c70903] Running
	I0717 19:10:17.688260 1081367 system_pods.go:89] "kube-controller-manager-multinode-464644" [6b598e8b-6c96-4014-b0a2-de37f107a0e9] Running
	I0717 19:10:17.688265 1081367 system_pods.go:89] "kube-proxy-qwsn5" [50e3f5e0-00d9-4412-b4de-649bc29733e9] Running
	I0717 19:10:17.688269 1081367 system_pods.go:89] "kube-scheduler-multinode-464644" [04e5660d-abb0-432a-861e-c5c242edfb98] Running
	I0717 19:10:17.688272 1081367 system_pods.go:89] "storage-provisioner" [bd46cf29-49d3-4c0a-908e-a323a525d8d5] Running
	I0717 19:10:17.688280 1081367 system_pods.go:126] duration metric: took 204.696513ms to wait for k8s-apps to be running ...
	I0717 19:10:17.688289 1081367 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 19:10:17.688354 1081367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:10:17.703899 1081367 system_svc.go:56] duration metric: took 15.597357ms WaitForService to wait for kubelet.
	I0717 19:10:17.703930 1081367 kubeadm.go:581] duration metric: took 8.9725891s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 19:10:17.703950 1081367 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:10:17.879486 1081367 request.go:628] Waited for 175.420716ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes
	I0717 19:10:17.879567 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes
	I0717 19:10:17.879572 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:17.879581 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:17.879588 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:17.883676 1081367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 19:10:17.883709 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:17.883722 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:17.883730 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:17.883735 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:17.883741 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:17.883746 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:17 GMT
	I0717 19:10:17.883752 1081367 round_trippers.go:580]     Audit-Id: 06a61c62-538b-478b-a190-1be2ad2720fd
	I0717 19:10:17.883842 1081367 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"453"},"items":[{"metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"423","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 5952 chars]
	I0717 19:10:17.884210 1081367 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 19:10:17.884232 1081367 node_conditions.go:123] node cpu capacity is 2
	I0717 19:10:17.884272 1081367 node_conditions.go:105] duration metric: took 180.317783ms to run NodePressure ...
	I0717 19:10:17.884284 1081367 start.go:228] waiting for startup goroutines ...
	I0717 19:10:17.884295 1081367 start.go:233] waiting for cluster config update ...
	I0717 19:10:17.884305 1081367 start.go:242] writing updated cluster config ...
	I0717 19:10:17.887163 1081367 out.go:177] 
	I0717 19:10:17.889416 1081367 config.go:182] Loaded profile config "multinode-464644": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:10:17.889537 1081367 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/config.json ...
	I0717 19:10:17.892277 1081367 out.go:177] * Starting worker node multinode-464644-m02 in cluster multinode-464644
	I0717 19:10:17.894783 1081367 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 19:10:17.894830 1081367 cache.go:57] Caching tarball of preloaded images
	I0717 19:10:17.894985 1081367 preload.go:174] Found /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 19:10:17.895005 1081367 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 19:10:17.895141 1081367 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/config.json ...
	I0717 19:10:17.895402 1081367 start.go:365] acquiring machines lock for multinode-464644-m02: {Name:mk1fbc5d19c9a63796b02883911e39f1c406ff89 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:10:17.895544 1081367 start.go:369] acquired machines lock for "multinode-464644-m02" in 74.671µs
	I0717 19:10:17.895578 1081367 start.go:93] Provisioning new machine with config: &{Name:multinode-464644 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-4
64644 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name:m02 IP: Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0717 19:10:17.895684 1081367 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0717 19:10:17.897994 1081367 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 19:10:17.898111 1081367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:10:17.898160 1081367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:10:17.913838 1081367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46191
	I0717 19:10:17.914353 1081367 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:10:17.914951 1081367 main.go:141] libmachine: Using API Version  1
	I0717 19:10:17.914974 1081367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:10:17.915346 1081367 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:10:17.915588 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetMachineName
	I0717 19:10:17.915785 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .DriverName
	I0717 19:10:17.915967 1081367 start.go:159] libmachine.API.Create for "multinode-464644" (driver="kvm2")
	I0717 19:10:17.915995 1081367 client.go:168] LocalClient.Create starting
	I0717 19:10:17.916032 1081367 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem
	I0717 19:10:17.916066 1081367 main.go:141] libmachine: Decoding PEM data...
	I0717 19:10:17.916086 1081367 main.go:141] libmachine: Parsing certificate...
	I0717 19:10:17.916147 1081367 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem
	I0717 19:10:17.916167 1081367 main.go:141] libmachine: Decoding PEM data...
	I0717 19:10:17.916178 1081367 main.go:141] libmachine: Parsing certificate...
	I0717 19:10:17.916195 1081367 main.go:141] libmachine: Running pre-create checks...
	I0717 19:10:17.916204 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .PreCreateCheck
	I0717 19:10:17.916425 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetConfigRaw
	I0717 19:10:17.916889 1081367 main.go:141] libmachine: Creating machine...
	I0717 19:10:17.916904 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .Create
	I0717 19:10:17.917088 1081367 main.go:141] libmachine: (multinode-464644-m02) Creating KVM machine...
	I0717 19:10:17.918642 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | found existing default KVM network
	I0717 19:10:17.918807 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | found existing private KVM network mk-multinode-464644
	I0717 19:10:17.919026 1081367 main.go:141] libmachine: (multinode-464644-m02) Setting up store path in /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644-m02 ...
	I0717 19:10:17.919054 1081367 main.go:141] libmachine: (multinode-464644-m02) Building disk image from file:///home/jenkins/minikube-integration/16890-1061725/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso
	I0717 19:10:17.919164 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | I0717 19:10:17.919022 1081747 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/16890-1061725/.minikube
	I0717 19:10:17.919249 1081367 main.go:141] libmachine: (multinode-464644-m02) Downloading /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/16890-1061725/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso...
	I0717 19:10:18.155081 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | I0717 19:10:18.154893 1081747 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644-m02/id_rsa...
	I0717 19:10:18.370741 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | I0717 19:10:18.370564 1081747 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644-m02/multinode-464644-m02.rawdisk...
	I0717 19:10:18.370783 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | Writing magic tar header
	I0717 19:10:18.370798 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | Writing SSH key tar header
	I0717 19:10:18.370812 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | I0717 19:10:18.370682 1081747 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644-m02 ...
	I0717 19:10:18.370826 1081367 main.go:141] libmachine: (multinode-464644-m02) Setting executable bit set on /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644-m02 (perms=drwx------)
	I0717 19:10:18.370842 1081367 main.go:141] libmachine: (multinode-464644-m02) Setting executable bit set on /home/jenkins/minikube-integration/16890-1061725/.minikube/machines (perms=drwxr-xr-x)
	I0717 19:10:18.370853 1081367 main.go:141] libmachine: (multinode-464644-m02) Setting executable bit set on /home/jenkins/minikube-integration/16890-1061725/.minikube (perms=drwxr-xr-x)
	I0717 19:10:18.370871 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644-m02
	I0717 19:10:18.370897 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines
	I0717 19:10:18.370918 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16890-1061725/.minikube
	I0717 19:10:18.370933 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16890-1061725
	I0717 19:10:18.370951 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 19:10:18.370968 1081367 main.go:141] libmachine: (multinode-464644-m02) Setting executable bit set on /home/jenkins/minikube-integration/16890-1061725 (perms=drwxrwxr-x)
	I0717 19:10:18.370986 1081367 main.go:141] libmachine: (multinode-464644-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 19:10:18.370998 1081367 main.go:141] libmachine: (multinode-464644-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 19:10:18.371017 1081367 main.go:141] libmachine: (multinode-464644-m02) Creating domain...
	I0717 19:10:18.371073 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | Checking permissions on dir: /home/jenkins
	I0717 19:10:18.371109 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | Checking permissions on dir: /home
	I0717 19:10:18.371126 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | Skipping /home - not owner
	I0717 19:10:18.372247 1081367 main.go:141] libmachine: (multinode-464644-m02) define libvirt domain using xml: 
	I0717 19:10:18.372280 1081367 main.go:141] libmachine: (multinode-464644-m02) <domain type='kvm'>
	I0717 19:10:18.372293 1081367 main.go:141] libmachine: (multinode-464644-m02)   <name>multinode-464644-m02</name>
	I0717 19:10:18.372306 1081367 main.go:141] libmachine: (multinode-464644-m02)   <memory unit='MiB'>2200</memory>
	I0717 19:10:18.372320 1081367 main.go:141] libmachine: (multinode-464644-m02)   <vcpu>2</vcpu>
	I0717 19:10:18.372329 1081367 main.go:141] libmachine: (multinode-464644-m02)   <features>
	I0717 19:10:18.372343 1081367 main.go:141] libmachine: (multinode-464644-m02)     <acpi/>
	I0717 19:10:18.372356 1081367 main.go:141] libmachine: (multinode-464644-m02)     <apic/>
	I0717 19:10:18.372369 1081367 main.go:141] libmachine: (multinode-464644-m02)     <pae/>
	I0717 19:10:18.372380 1081367 main.go:141] libmachine: (multinode-464644-m02)     
	I0717 19:10:18.372488 1081367 main.go:141] libmachine: (multinode-464644-m02)   </features>
	I0717 19:10:18.372538 1081367 main.go:141] libmachine: (multinode-464644-m02)   <cpu mode='host-passthrough'>
	I0717 19:10:18.372553 1081367 main.go:141] libmachine: (multinode-464644-m02)   
	I0717 19:10:18.372566 1081367 main.go:141] libmachine: (multinode-464644-m02)   </cpu>
	I0717 19:10:18.372582 1081367 main.go:141] libmachine: (multinode-464644-m02)   <os>
	I0717 19:10:18.372597 1081367 main.go:141] libmachine: (multinode-464644-m02)     <type>hvm</type>
	I0717 19:10:18.372620 1081367 main.go:141] libmachine: (multinode-464644-m02)     <boot dev='cdrom'/>
	I0717 19:10:18.372633 1081367 main.go:141] libmachine: (multinode-464644-m02)     <boot dev='hd'/>
	I0717 19:10:18.372672 1081367 main.go:141] libmachine: (multinode-464644-m02)     <bootmenu enable='no'/>
	I0717 19:10:18.372699 1081367 main.go:141] libmachine: (multinode-464644-m02)   </os>
	I0717 19:10:18.372733 1081367 main.go:141] libmachine: (multinode-464644-m02)   <devices>
	I0717 19:10:18.372751 1081367 main.go:141] libmachine: (multinode-464644-m02)     <disk type='file' device='cdrom'>
	I0717 19:10:18.372766 1081367 main.go:141] libmachine: (multinode-464644-m02)       <source file='/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644-m02/boot2docker.iso'/>
	I0717 19:10:18.372785 1081367 main.go:141] libmachine: (multinode-464644-m02)       <target dev='hdc' bus='scsi'/>
	I0717 19:10:18.372801 1081367 main.go:141] libmachine: (multinode-464644-m02)       <readonly/>
	I0717 19:10:18.372815 1081367 main.go:141] libmachine: (multinode-464644-m02)     </disk>
	I0717 19:10:18.372830 1081367 main.go:141] libmachine: (multinode-464644-m02)     <disk type='file' device='disk'>
	I0717 19:10:18.372848 1081367 main.go:141] libmachine: (multinode-464644-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 19:10:18.372866 1081367 main.go:141] libmachine: (multinode-464644-m02)       <source file='/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644-m02/multinode-464644-m02.rawdisk'/>
	I0717 19:10:18.372875 1081367 main.go:141] libmachine: (multinode-464644-m02)       <target dev='hda' bus='virtio'/>
	I0717 19:10:18.372889 1081367 main.go:141] libmachine: (multinode-464644-m02)     </disk>
	I0717 19:10:18.372902 1081367 main.go:141] libmachine: (multinode-464644-m02)     <interface type='network'>
	I0717 19:10:18.372918 1081367 main.go:141] libmachine: (multinode-464644-m02)       <source network='mk-multinode-464644'/>
	I0717 19:10:18.372935 1081367 main.go:141] libmachine: (multinode-464644-m02)       <model type='virtio'/>
	I0717 19:10:18.372948 1081367 main.go:141] libmachine: (multinode-464644-m02)     </interface>
	I0717 19:10:18.372960 1081367 main.go:141] libmachine: (multinode-464644-m02)     <interface type='network'>
	I0717 19:10:18.372972 1081367 main.go:141] libmachine: (multinode-464644-m02)       <source network='default'/>
	I0717 19:10:18.372984 1081367 main.go:141] libmachine: (multinode-464644-m02)       <model type='virtio'/>
	I0717 19:10:18.373000 1081367 main.go:141] libmachine: (multinode-464644-m02)     </interface>
	I0717 19:10:18.373018 1081367 main.go:141] libmachine: (multinode-464644-m02)     <serial type='pty'>
	I0717 19:10:18.373032 1081367 main.go:141] libmachine: (multinode-464644-m02)       <target port='0'/>
	I0717 19:10:18.373044 1081367 main.go:141] libmachine: (multinode-464644-m02)     </serial>
	I0717 19:10:18.373057 1081367 main.go:141] libmachine: (multinode-464644-m02)     <console type='pty'>
	I0717 19:10:18.373067 1081367 main.go:141] libmachine: (multinode-464644-m02)       <target type='serial' port='0'/>
	I0717 19:10:18.373080 1081367 main.go:141] libmachine: (multinode-464644-m02)     </console>
	I0717 19:10:18.373097 1081367 main.go:141] libmachine: (multinode-464644-m02)     <rng model='virtio'>
	I0717 19:10:18.373112 1081367 main.go:141] libmachine: (multinode-464644-m02)       <backend model='random'>/dev/random</backend>
	I0717 19:10:18.373123 1081367 main.go:141] libmachine: (multinode-464644-m02)     </rng>
	I0717 19:10:18.373137 1081367 main.go:141] libmachine: (multinode-464644-m02)     
	I0717 19:10:18.373149 1081367 main.go:141] libmachine: (multinode-464644-m02)     
	I0717 19:10:18.373161 1081367 main.go:141] libmachine: (multinode-464644-m02)   </devices>
	I0717 19:10:18.373178 1081367 main.go:141] libmachine: (multinode-464644-m02) </domain>
	I0717 19:10:18.373196 1081367 main.go:141] libmachine: (multinode-464644-m02) 
	I0717 19:10:18.380734 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:21:98:74 in network default
	I0717 19:10:18.381398 1081367 main.go:141] libmachine: (multinode-464644-m02) Ensuring networks are active...
	I0717 19:10:18.381456 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:18.382268 1081367 main.go:141] libmachine: (multinode-464644-m02) Ensuring network default is active
	I0717 19:10:18.382700 1081367 main.go:141] libmachine: (multinode-464644-m02) Ensuring network mk-multinode-464644 is active
	I0717 19:10:18.382962 1081367 main.go:141] libmachine: (multinode-464644-m02) Getting domain xml...
	I0717 19:10:18.383676 1081367 main.go:141] libmachine: (multinode-464644-m02) Creating domain...
	I0717 19:10:19.699688 1081367 main.go:141] libmachine: (multinode-464644-m02) Waiting to get IP...
	I0717 19:10:19.700582 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:19.701056 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | unable to find current IP address of domain multinode-464644-m02 in network mk-multinode-464644
	I0717 19:10:19.701097 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | I0717 19:10:19.701033 1081747 retry.go:31] will retry after 250.983725ms: waiting for machine to come up
	I0717 19:10:19.953862 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:19.954395 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | unable to find current IP address of domain multinode-464644-m02 in network mk-multinode-464644
	I0717 19:10:19.954426 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | I0717 19:10:19.954333 1081747 retry.go:31] will retry after 357.071126ms: waiting for machine to come up
	I0717 19:10:20.313163 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:20.313743 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | unable to find current IP address of domain multinode-464644-m02 in network mk-multinode-464644
	I0717 19:10:20.313774 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | I0717 19:10:20.313720 1081747 retry.go:31] will retry after 342.834691ms: waiting for machine to come up
	I0717 19:10:20.658441 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:20.658932 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | unable to find current IP address of domain multinode-464644-m02 in network mk-multinode-464644
	I0717 19:10:20.658961 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | I0717 19:10:20.658876 1081747 retry.go:31] will retry after 527.667513ms: waiting for machine to come up
	I0717 19:10:21.188735 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:21.189271 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | unable to find current IP address of domain multinode-464644-m02 in network mk-multinode-464644
	I0717 19:10:21.189296 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | I0717 19:10:21.189213 1081747 retry.go:31] will retry after 638.623241ms: waiting for machine to come up
	I0717 19:10:21.829938 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:21.830341 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | unable to find current IP address of domain multinode-464644-m02 in network mk-multinode-464644
	I0717 19:10:21.830373 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | I0717 19:10:21.830276 1081747 retry.go:31] will retry after 709.460104ms: waiting for machine to come up
	I0717 19:10:22.541314 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:22.541920 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | unable to find current IP address of domain multinode-464644-m02 in network mk-multinode-464644
	I0717 19:10:22.541945 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | I0717 19:10:22.541862 1081747 retry.go:31] will retry after 1.122030894s: waiting for machine to come up
	I0717 19:10:23.665755 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:23.666276 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | unable to find current IP address of domain multinode-464644-m02 in network mk-multinode-464644
	I0717 19:10:23.666312 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | I0717 19:10:23.666204 1081747 retry.go:31] will retry after 1.204092097s: waiting for machine to come up
	I0717 19:10:24.872808 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:24.873309 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | unable to find current IP address of domain multinode-464644-m02 in network mk-multinode-464644
	I0717 19:10:24.873347 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | I0717 19:10:24.873237 1081747 retry.go:31] will retry after 1.381326369s: waiting for machine to come up
	I0717 19:10:26.257020 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:26.257468 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | unable to find current IP address of domain multinode-464644-m02 in network mk-multinode-464644
	I0717 19:10:26.257624 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | I0717 19:10:26.257401 1081747 retry.go:31] will retry after 2.145538213s: waiting for machine to come up
	I0717 19:10:28.404569 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:28.405005 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | unable to find current IP address of domain multinode-464644-m02 in network mk-multinode-464644
	I0717 19:10:28.405049 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | I0717 19:10:28.404988 1081747 retry.go:31] will retry after 2.329364365s: waiting for machine to come up
	I0717 19:10:30.737609 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:30.738065 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | unable to find current IP address of domain multinode-464644-m02 in network mk-multinode-464644
	I0717 19:10:30.738112 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | I0717 19:10:30.737981 1081747 retry.go:31] will retry after 3.087815302s: waiting for machine to come up
	I0717 19:10:33.827130 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:33.827702 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | unable to find current IP address of domain multinode-464644-m02 in network mk-multinode-464644
	I0717 19:10:33.827738 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | I0717 19:10:33.827630 1081747 retry.go:31] will retry after 2.928333958s: waiting for machine to come up
	I0717 19:10:36.758818 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:36.759188 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | unable to find current IP address of domain multinode-464644-m02 in network mk-multinode-464644
	I0717 19:10:36.759214 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | I0717 19:10:36.759132 1081747 retry.go:31] will retry after 3.461997711s: waiting for machine to come up
	I0717 19:10:40.222553 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:40.223060 1081367 main.go:141] libmachine: (multinode-464644-m02) Found IP for machine: 192.168.39.49
	I0717 19:10:40.223087 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has current primary IP address 192.168.39.49 and MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:40.223105 1081367 main.go:141] libmachine: (multinode-464644-m02) Reserving static IP address...
	I0717 19:10:40.223536 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | unable to find host DHCP lease matching {name: "multinode-464644-m02", mac: "52:54:00:2d:46:84", ip: "192.168.39.49"} in network mk-multinode-464644
	I0717 19:10:40.309909 1081367 main.go:141] libmachine: (multinode-464644-m02) Reserved static IP address: 192.168.39.49
	I0717 19:10:40.309953 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | Getting to WaitForSSH function...
	I0717 19:10:40.309964 1081367 main.go:141] libmachine: (multinode-464644-m02) Waiting for SSH to be available...
	I0717 19:10:40.313057 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:40.313351 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:2d:46:84", ip: ""} in network mk-multinode-464644
	I0717 19:10:40.313390 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | unable to find defined IP address of network mk-multinode-464644 interface with MAC address 52:54:00:2d:46:84
	I0717 19:10:40.313579 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | Using SSH client type: external
	I0717 19:10:40.313614 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644-m02/id_rsa (-rw-------)
	I0717 19:10:40.313647 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:10:40.313665 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | About to run SSH command:
	I0717 19:10:40.313682 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | exit 0
	I0717 19:10:40.318227 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | SSH cmd err, output: exit status 255: 
	I0717 19:10:40.318255 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0717 19:10:40.318267 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | command : exit 0
	I0717 19:10:40.318277 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | err     : exit status 255
	I0717 19:10:40.318292 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | output  : 
	I0717 19:10:43.318595 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | Getting to WaitForSSH function...
	I0717 19:10:43.321509 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:43.322175 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:46:84", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:10:34 +0000 UTC Type:0 Mac:52:54:00:2d:46:84 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:multinode-464644-m02 Clientid:01:52:54:00:2d:46:84}
	I0717 19:10:43.322211 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined IP address 192.168.39.49 and MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:43.322488 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | Using SSH client type: external
	I0717 19:10:43.322517 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644-m02/id_rsa (-rw-------)
	I0717 19:10:43.322547 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.49 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:10:43.322565 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | About to run SSH command:
	I0717 19:10:43.322582 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | exit 0
	I0717 19:10:43.409887 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | SSH cmd err, output: <nil>: 
	I0717 19:10:43.410149 1081367 main.go:141] libmachine: (multinode-464644-m02) KVM machine creation complete!
	I0717 19:10:43.410515 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetConfigRaw
	I0717 19:10:43.411169 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .DriverName
	I0717 19:10:43.411373 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .DriverName
	I0717 19:10:43.411568 1081367 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 19:10:43.411589 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetState
	I0717 19:10:43.413102 1081367 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 19:10:43.413118 1081367 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 19:10:43.413125 1081367 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 19:10:43.413132 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHHostname
	I0717 19:10:43.415327 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:43.415836 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:46:84", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:10:34 +0000 UTC Type:0 Mac:52:54:00:2d:46:84 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:multinode-464644-m02 Clientid:01:52:54:00:2d:46:84}
	I0717 19:10:43.415873 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined IP address 192.168.39.49 and MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:43.416057 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHPort
	I0717 19:10:43.416296 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHKeyPath
	I0717 19:10:43.416500 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHKeyPath
	I0717 19:10:43.416694 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHUsername
	I0717 19:10:43.416885 1081367 main.go:141] libmachine: Using SSH client type: native
	I0717 19:10:43.417619 1081367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I0717 19:10:43.417637 1081367 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 19:10:43.533037 1081367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:10:43.533074 1081367 main.go:141] libmachine: Detecting the provisioner...
	I0717 19:10:43.533089 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHHostname
	I0717 19:10:43.535954 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:43.536385 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:46:84", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:10:34 +0000 UTC Type:0 Mac:52:54:00:2d:46:84 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:multinode-464644-m02 Clientid:01:52:54:00:2d:46:84}
	I0717 19:10:43.536425 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined IP address 192.168.39.49 and MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:43.536565 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHPort
	I0717 19:10:43.536803 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHKeyPath
	I0717 19:10:43.537000 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHKeyPath
	I0717 19:10:43.537105 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHUsername
	I0717 19:10:43.537230 1081367 main.go:141] libmachine: Using SSH client type: native
	I0717 19:10:43.537693 1081367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I0717 19:10:43.537713 1081367 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 19:10:43.659233 1081367 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gf5d52c7-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0717 19:10:43.659347 1081367 main.go:141] libmachine: found compatible host: buildroot
	I0717 19:10:43.659361 1081367 main.go:141] libmachine: Provisioning with buildroot...
	I0717 19:10:43.659376 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetMachineName
	I0717 19:10:43.659719 1081367 buildroot.go:166] provisioning hostname "multinode-464644-m02"
	I0717 19:10:43.659756 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetMachineName
	I0717 19:10:43.660005 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHHostname
	I0717 19:10:43.662962 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:43.663281 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:46:84", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:10:34 +0000 UTC Type:0 Mac:52:54:00:2d:46:84 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:multinode-464644-m02 Clientid:01:52:54:00:2d:46:84}
	I0717 19:10:43.663317 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined IP address 192.168.39.49 and MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:43.663523 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHPort
	I0717 19:10:43.663738 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHKeyPath
	I0717 19:10:43.663948 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHKeyPath
	I0717 19:10:43.664112 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHUsername
	I0717 19:10:43.664316 1081367 main.go:141] libmachine: Using SSH client type: native
	I0717 19:10:43.664726 1081367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I0717 19:10:43.664740 1081367 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-464644-m02 && echo "multinode-464644-m02" | sudo tee /etc/hostname
	I0717 19:10:43.794214 1081367 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-464644-m02
	
	I0717 19:10:43.794253 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHHostname
	I0717 19:10:43.797349 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:43.797839 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:46:84", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:10:34 +0000 UTC Type:0 Mac:52:54:00:2d:46:84 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:multinode-464644-m02 Clientid:01:52:54:00:2d:46:84}
	I0717 19:10:43.797883 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined IP address 192.168.39.49 and MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:43.798081 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHPort
	I0717 19:10:43.798318 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHKeyPath
	I0717 19:10:43.798473 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHKeyPath
	I0717 19:10:43.798646 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHUsername
	I0717 19:10:43.798859 1081367 main.go:141] libmachine: Using SSH client type: native
	I0717 19:10:43.799298 1081367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I0717 19:10:43.799326 1081367 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-464644-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-464644-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-464644-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:10:43.926576 1081367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
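The two SSH commands above set the node hostname and then make /etc/hosts consistent with it. Below is a minimal Go sketch of the same idempotent /etc/hosts edit; the hostname value is taken from the log, and shelling out to bash locally (rather than through the provisioner's SSH runner) is an assumption made only for illustration.

// hosts_hint.go
// Illustrative only: re-creates the idempotent /etc/hosts update the
// provisioner runs over SSH in the log above. Not minikube's own code path.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	hostname := "multinode-464644-m02" // taken from the log; adjust as needed

	// Same pattern as the logged SSH command: only touch /etc/hosts if no
	// line already ends in the hostname, preferring to rewrite 127.0.1.1.
	script := fmt.Sprintf(`
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname)

	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	fmt.Printf("output: %s err: %v\n", out, err)
}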
	I0717 19:10:43.926613 1081367 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16890-1061725/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-1061725/.minikube}
	I0717 19:10:43.926642 1081367 buildroot.go:174] setting up certificates
	I0717 19:10:43.926657 1081367 provision.go:83] configureAuth start
	I0717 19:10:43.926676 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetMachineName
	I0717 19:10:43.927098 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetIP
	I0717 19:10:43.930264 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:43.930718 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:46:84", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:10:34 +0000 UTC Type:0 Mac:52:54:00:2d:46:84 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:multinode-464644-m02 Clientid:01:52:54:00:2d:46:84}
	I0717 19:10:43.930752 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined IP address 192.168.39.49 and MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:43.931065 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHHostname
	I0717 19:10:43.933602 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:43.934146 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:46:84", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:10:34 +0000 UTC Type:0 Mac:52:54:00:2d:46:84 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:multinode-464644-m02 Clientid:01:52:54:00:2d:46:84}
	I0717 19:10:43.934195 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined IP address 192.168.39.49 and MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:43.934355 1081367 provision.go:138] copyHostCerts
	I0717 19:10:43.934394 1081367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem
	I0717 19:10:43.934427 1081367 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem, removing ...
	I0717 19:10:43.934434 1081367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem
	I0717 19:10:43.934502 1081367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem (1082 bytes)
	I0717 19:10:43.934624 1081367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem
	I0717 19:10:43.934647 1081367 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem, removing ...
	I0717 19:10:43.934652 1081367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem
	I0717 19:10:43.934680 1081367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem (1123 bytes)
	I0717 19:10:43.934728 1081367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem
	I0717 19:10:43.934750 1081367 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem, removing ...
	I0717 19:10:43.934753 1081367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem
	I0717 19:10:43.934774 1081367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem (1675 bytes)
	I0717 19:10:43.934817 1081367 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem org=jenkins.multinode-464644-m02 san=[192.168.39.49 192.168.39.49 localhost 127.0.0.1 minikube multinode-464644-m02]
	I0717 19:10:44.151957 1081367 provision.go:172] copyRemoteCerts
	I0717 19:10:44.152035 1081367 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:10:44.152063 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHHostname
	I0717 19:10:44.155222 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:44.155772 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:46:84", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:10:34 +0000 UTC Type:0 Mac:52:54:00:2d:46:84 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:multinode-464644-m02 Clientid:01:52:54:00:2d:46:84}
	I0717 19:10:44.155819 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined IP address 192.168.39.49 and MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:44.156021 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHPort
	I0717 19:10:44.156308 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHKeyPath
	I0717 19:10:44.156561 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHUsername
	I0717 19:10:44.156751 1081367 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644-m02/id_rsa Username:docker}
	I0717 19:10:44.243103 1081367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 19:10:44.243196 1081367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 19:10:44.268945 1081367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 19:10:44.269037 1081367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 19:10:44.295169 1081367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 19:10:44.295268 1081367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0717 19:10:44.322141 1081367 provision.go:86] duration metric: configureAuth took 395.465472ms
	I0717 19:10:44.322173 1081367 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:10:44.322386 1081367 config.go:182] Loaded profile config "multinode-464644": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:10:44.322496 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHHostname
	I0717 19:10:44.325376 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:44.326015 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:46:84", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:10:34 +0000 UTC Type:0 Mac:52:54:00:2d:46:84 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:multinode-464644-m02 Clientid:01:52:54:00:2d:46:84}
	I0717 19:10:44.326641 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHPort
	I0717 19:10:44.326883 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined IP address 192.168.39.49 and MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:44.328023 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHKeyPath
	I0717 19:10:44.328299 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHKeyPath
	I0717 19:10:44.328507 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHUsername
	I0717 19:10:44.328748 1081367 main.go:141] libmachine: Using SSH client type: native
	I0717 19:10:44.329198 1081367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I0717 19:10:44.329223 1081367 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:10:44.662385 1081367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:10:44.662422 1081367 main.go:141] libmachine: Checking connection to Docker...
	I0717 19:10:44.662437 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetURL
	I0717 19:10:44.664007 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | Using libvirt version 6000000
	I0717 19:10:44.666494 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:44.666976 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:46:84", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:10:34 +0000 UTC Type:0 Mac:52:54:00:2d:46:84 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:multinode-464644-m02 Clientid:01:52:54:00:2d:46:84}
	I0717 19:10:44.667008 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined IP address 192.168.39.49 and MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:44.667272 1081367 main.go:141] libmachine: Docker is up and running!
	I0717 19:10:44.667294 1081367 main.go:141] libmachine: Reticulating splines...
	I0717 19:10:44.667303 1081367 client.go:171] LocalClient.Create took 26.75129607s
	I0717 19:10:44.667332 1081367 start.go:167] duration metric: libmachine.API.Create for "multinode-464644" took 26.751363721s
	I0717 19:10:44.667346 1081367 start.go:300] post-start starting for "multinode-464644-m02" (driver="kvm2")
	I0717 19:10:44.667360 1081367 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:10:44.667389 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .DriverName
	I0717 19:10:44.667735 1081367 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:10:44.667776 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHHostname
	I0717 19:10:44.670227 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:44.670695 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:46:84", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:10:34 +0000 UTC Type:0 Mac:52:54:00:2d:46:84 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:multinode-464644-m02 Clientid:01:52:54:00:2d:46:84}
	I0717 19:10:44.670733 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined IP address 192.168.39.49 and MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:44.670934 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHPort
	I0717 19:10:44.671159 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHKeyPath
	I0717 19:10:44.671369 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHUsername
	I0717 19:10:44.671552 1081367 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644-m02/id_rsa Username:docker}
	I0717 19:10:44.760716 1081367 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:10:44.765020 1081367 command_runner.go:130] > NAME=Buildroot
	I0717 19:10:44.765044 1081367 command_runner.go:130] > VERSION=2021.02.12-1-gf5d52c7-dirty
	I0717 19:10:44.765049 1081367 command_runner.go:130] > ID=buildroot
	I0717 19:10:44.765055 1081367 command_runner.go:130] > VERSION_ID=2021.02.12
	I0717 19:10:44.765059 1081367 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0717 19:10:44.765200 1081367 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 19:10:44.765255 1081367 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/addons for local assets ...
	I0717 19:10:44.765354 1081367 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/files for local assets ...
	I0717 19:10:44.765450 1081367 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> 10689542.pem in /etc/ssl/certs
	I0717 19:10:44.765464 1081367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> /etc/ssl/certs/10689542.pem
	I0717 19:10:44.765608 1081367 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:10:44.775807 1081367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:10:44.800830 1081367 start.go:303] post-start completed in 133.467844ms
	I0717 19:10:44.800907 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetConfigRaw
	I0717 19:10:44.801707 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetIP
	I0717 19:10:44.804982 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:44.805431 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:46:84", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:10:34 +0000 UTC Type:0 Mac:52:54:00:2d:46:84 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:multinode-464644-m02 Clientid:01:52:54:00:2d:46:84}
	I0717 19:10:44.805464 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined IP address 192.168.39.49 and MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:44.805722 1081367 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/config.json ...
	I0717 19:10:44.805939 1081367 start.go:128] duration metric: createHost completed in 26.910243009s
	I0717 19:10:44.805971 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHHostname
	I0717 19:10:44.809123 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:44.809520 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:46:84", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:10:34 +0000 UTC Type:0 Mac:52:54:00:2d:46:84 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:multinode-464644-m02 Clientid:01:52:54:00:2d:46:84}
	I0717 19:10:44.809576 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined IP address 192.168.39.49 and MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:44.809811 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHPort
	I0717 19:10:44.810090 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHKeyPath
	I0717 19:10:44.810304 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHKeyPath
	I0717 19:10:44.810444 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHUsername
	I0717 19:10:44.810644 1081367 main.go:141] libmachine: Using SSH client type: native
	I0717 19:10:44.811273 1081367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I0717 19:10:44.811296 1081367 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:10:44.931162 1081367 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689621044.914401742
	
	I0717 19:10:44.931187 1081367 fix.go:206] guest clock: 1689621044.914401742
	I0717 19:10:44.931195 1081367 fix.go:219] Guest: 2023-07-17 19:10:44.914401742 +0000 UTC Remote: 2023-07-17 19:10:44.805954936 +0000 UTC m=+100.628522710 (delta=108.446806ms)
	I0717 19:10:44.931212 1081367 fix.go:190] guest clock delta is within tolerance: 108.446806ms
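The fix.go lines above compare the guest clock (read over SSH with date) against the host clock and accept the ~108ms delta. A small sketch of that comparison using the exact timestamps from the log; the 2-second tolerance below is an assumption for illustration, since the log only shows that this particular delta was accepted.

// clockdelta.go
// Sketch of the guest/host clock comparison; tolerance value is assumed.
package main

import (
	"fmt"
	"time"
)

func withinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol
}

func main() {
	guest := time.Unix(0, 1689621044914401742)                      // 1689621044.914401742 from the log
	host := time.Date(2023, 7, 17, 19, 10, 44, 805954936, time.UTC) // "Remote" time from the log

	delta, ok := withinTolerance(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}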
	I0717 19:10:44.931218 1081367 start.go:83] releasing machines lock for "multinode-464644-m02", held for 27.035656538s
	I0717 19:10:44.931244 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .DriverName
	I0717 19:10:44.931671 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetIP
	I0717 19:10:44.935085 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:44.935602 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:46:84", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:10:34 +0000 UTC Type:0 Mac:52:54:00:2d:46:84 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:multinode-464644-m02 Clientid:01:52:54:00:2d:46:84}
	I0717 19:10:44.935642 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined IP address 192.168.39.49 and MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:44.939991 1081367 out.go:177] * Found network options:
	I0717 19:10:44.942750 1081367 out.go:177]   - NO_PROXY=192.168.39.174
	W0717 19:10:44.944877 1081367 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 19:10:44.944931 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .DriverName
	I0717 19:10:44.945995 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .DriverName
	I0717 19:10:44.946269 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .DriverName
	I0717 19:10:44.946391 1081367 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:10:44.946435 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHHostname
	W0717 19:10:44.946541 1081367 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 19:10:44.946622 1081367 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:10:44.946642 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHHostname
	I0717 19:10:44.949933 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:44.950008 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:44.950465 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:46:84", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:10:34 +0000 UTC Type:0 Mac:52:54:00:2d:46:84 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:multinode-464644-m02 Clientid:01:52:54:00:2d:46:84}
	I0717 19:10:44.950507 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined IP address 192.168.39.49 and MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:44.950534 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:46:84", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:10:34 +0000 UTC Type:0 Mac:52:54:00:2d:46:84 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:multinode-464644-m02 Clientid:01:52:54:00:2d:46:84}
	I0717 19:10:44.950549 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined IP address 192.168.39.49 and MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:44.950649 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHPort
	I0717 19:10:44.950976 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHKeyPath
	I0717 19:10:44.951007 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHPort
	I0717 19:10:44.951275 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHKeyPath
	I0717 19:10:44.951276 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHUsername
	I0717 19:10:44.951622 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHUsername
	I0717 19:10:44.951658 1081367 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644-m02/id_rsa Username:docker}
	I0717 19:10:44.951814 1081367 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644-m02/id_rsa Username:docker}
	I0717 19:10:45.204826 1081367 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 19:10:45.204827 1081367 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0717 19:10:45.211869 1081367 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0717 19:10:45.211926 1081367 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:10:45.212019 1081367 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:10:45.227932 1081367 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0717 19:10:45.228099 1081367 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:10:45.228122 1081367 start.go:469] detecting cgroup driver to use...
	I0717 19:10:45.228197 1081367 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:10:45.245276 1081367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:10:45.258659 1081367 docker.go:196] disabling cri-docker service (if available) ...
	I0717 19:10:45.258735 1081367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:10:45.272403 1081367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:10:45.285939 1081367 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:10:45.392456 1081367 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0717 19:10:45.392540 1081367 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:10:45.514895 1081367 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0717 19:10:45.514937 1081367 docker.go:212] disabling docker service ...
	I0717 19:10:45.515003 1081367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:10:45.529880 1081367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:10:45.541391 1081367 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0717 19:10:45.542298 1081367 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:10:45.654834 1081367 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0717 19:10:45.654939 1081367 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:10:45.768768 1081367 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0717 19:10:45.768802 1081367 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0717 19:10:45.768855 1081367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:10:45.782057 1081367 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:10:45.800360 1081367 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
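The printf-piped-to-tee step above simply drops a one-line /etc/crictl.yaml pointing crictl at the CRI-O socket. A sketch of the equivalent write follows; doing it locally with os.WriteFile is only for illustration, the test performs it over SSH on the new node.

// crictl_yaml.go
// Sketch of writing /etc/crictl.yaml so crictl talks to the CRI-O socket.
package main

import (
	"fmt"
	"os"
)

func main() {
	const conf = "runtime-endpoint: unix:///var/run/crio/crio.sock\n"
	// 0644 is what `sudo tee` would typically leave behind with a default umask.
	if err := os.WriteFile("/etc/crictl.yaml", []byte(conf), 0o644); err != nil {
		fmt.Println("write /etc/crictl.yaml:", err)
	}
}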
	I0717 19:10:45.800924 1081367 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:10:45.800990 1081367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:10:45.810758 1081367 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:10:45.810848 1081367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:10:45.821392 1081367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:10:45.832121 1081367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:10:45.842995 1081367 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:10:45.854262 1081367 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:10:45.863529 1081367 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:10:45.863597 1081367 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:10:45.863647 1081367 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:10:45.877812 1081367 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:10:45.887545 1081367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:10:45.998193 1081367 ssh_runner.go:195] Run: sudo systemctl restart crio
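The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup driver, conmon cgroup), clears the stale CNI config, enables br_netfilter and IP forwarding, and restarts CRI-O. Below is a sketch of the two sed substitutions, mirroring the logged commands; running it against a live node is not implied, and the conmon_cgroup edits are left out for brevity.

// crio_conf.go
// Sketch of the in-place edits applied to /etc/crio/crio.conf.d/02-crio.conf.
package main

import (
	"fmt"
	"os/exec"
)

func sedInPlace(expr, file string) error {
	out, err := exec.Command("sudo", "sed", "-i", expr, file).CombinedOutput()
	if err != nil {
		return fmt.Errorf("sed %q on %s: %v (%s)", expr, file, err, out)
	}
	return nil
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"

	// Same substitutions as the logged ssh_runner commands.
	edits := []string{
		`s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|`,
		`s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`,
	}
	for _, e := range edits {
		if err := sedInPlace(e, conf); err != nil {
			fmt.Println(err)
		}
	}
}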
	I0717 19:10:46.173271 1081367 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:10:46.173363 1081367 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:10:46.182157 1081367 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0717 19:10:46.182185 1081367 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0717 19:10:46.182192 1081367 command_runner.go:130] > Device: 16h/22d	Inode: 713         Links: 1
	I0717 19:10:46.182199 1081367 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 19:10:46.182204 1081367 command_runner.go:130] > Access: 2023-07-17 19:10:46.141275159 +0000
	I0717 19:10:46.182209 1081367 command_runner.go:130] > Modify: 2023-07-17 19:10:46.141275159 +0000
	I0717 19:10:46.182214 1081367 command_runner.go:130] > Change: 2023-07-17 19:10:46.141275159 +0000
	I0717 19:10:46.182219 1081367 command_runner.go:130] >  Birth: -
	I0717 19:10:46.182898 1081367 start.go:537] Will wait 60s for crictl version
	I0717 19:10:46.182993 1081367 ssh_runner.go:195] Run: which crictl
	I0717 19:10:46.188154 1081367 command_runner.go:130] > /usr/bin/crictl
	I0717 19:10:46.188307 1081367 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:10:46.220234 1081367 command_runner.go:130] > Version:  0.1.0
	I0717 19:10:46.220265 1081367 command_runner.go:130] > RuntimeName:  cri-o
	I0717 19:10:46.220277 1081367 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0717 19:10:46.220286 1081367 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0717 19:10:46.220305 1081367 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 19:10:46.220378 1081367 ssh_runner.go:195] Run: crio --version
	I0717 19:10:46.267874 1081367 command_runner.go:130] > crio version 1.24.1
	I0717 19:10:46.267903 1081367 command_runner.go:130] > Version:          1.24.1
	I0717 19:10:46.267920 1081367 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0717 19:10:46.267927 1081367 command_runner.go:130] > GitTreeState:     dirty
	I0717 19:10:46.267937 1081367 command_runner.go:130] > BuildDate:        2023-07-15T02:24:22Z
	I0717 19:10:46.267945 1081367 command_runner.go:130] > GoVersion:        go1.19.9
	I0717 19:10:46.267952 1081367 command_runner.go:130] > Compiler:         gc
	I0717 19:10:46.267959 1081367 command_runner.go:130] > Platform:         linux/amd64
	I0717 19:10:46.267968 1081367 command_runner.go:130] > Linkmode:         dynamic
	I0717 19:10:46.267980 1081367 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0717 19:10:46.267984 1081367 command_runner.go:130] > SeccompEnabled:   true
	I0717 19:10:46.267989 1081367 command_runner.go:130] > AppArmorEnabled:  false
	I0717 19:10:46.268090 1081367 ssh_runner.go:195] Run: crio --version
	I0717 19:10:46.320441 1081367 command_runner.go:130] > crio version 1.24.1
	I0717 19:10:46.320474 1081367 command_runner.go:130] > Version:          1.24.1
	I0717 19:10:46.320486 1081367 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0717 19:10:46.320493 1081367 command_runner.go:130] > GitTreeState:     dirty
	I0717 19:10:46.320512 1081367 command_runner.go:130] > BuildDate:        2023-07-15T02:24:22Z
	I0717 19:10:46.320520 1081367 command_runner.go:130] > GoVersion:        go1.19.9
	I0717 19:10:46.320527 1081367 command_runner.go:130] > Compiler:         gc
	I0717 19:10:46.320552 1081367 command_runner.go:130] > Platform:         linux/amd64
	I0717 19:10:46.320561 1081367 command_runner.go:130] > Linkmode:         dynamic
	I0717 19:10:46.320571 1081367 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0717 19:10:46.320582 1081367 command_runner.go:130] > SeccompEnabled:   true
	I0717 19:10:46.320588 1081367 command_runner.go:130] > AppArmorEnabled:  false
	I0717 19:10:46.325262 1081367 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0717 19:10:46.327717 1081367 out.go:177]   - env NO_PROXY=192.168.39.174
	I0717 19:10:46.329897 1081367 main.go:141] libmachine: (multinode-464644-m02) Calling .GetIP
	I0717 19:10:46.333216 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:46.333636 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:46:84", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:10:34 +0000 UTC Type:0 Mac:52:54:00:2d:46:84 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:multinode-464644-m02 Clientid:01:52:54:00:2d:46:84}
	I0717 19:10:46.333677 1081367 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined IP address 192.168.39.49 and MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:10:46.333928 1081367 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 19:10:46.339195 1081367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:10:46.354935 1081367 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644 for IP: 192.168.39.49
	I0717 19:10:46.354994 1081367 certs.go:190] acquiring lock for shared ca certs: {Name:mkdfbc203d7bd874aff2f1c4e5c3352251804e32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:10:46.355160 1081367 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key
	I0717 19:10:46.355208 1081367 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key
	I0717 19:10:46.355222 1081367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 19:10:46.355236 1081367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 19:10:46.355248 1081367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 19:10:46.355260 1081367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 19:10:46.355313 1081367 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem (1338 bytes)
	W0717 19:10:46.355339 1081367 certs.go:433] ignoring /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954_empty.pem, impossibly tiny 0 bytes
	I0717 19:10:46.355348 1081367 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:10:46.355367 1081367 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem (1082 bytes)
	I0717 19:10:46.355390 1081367 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:10:46.355412 1081367 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem (1675 bytes)
	I0717 19:10:46.355487 1081367 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:10:46.355533 1081367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem -> /usr/share/ca-certificates/1068954.pem
	I0717 19:10:46.355554 1081367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> /usr/share/ca-certificates/10689542.pem
	I0717 19:10:46.355570 1081367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:10:46.356011 1081367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:10:46.385332 1081367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 19:10:46.414438 1081367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:10:46.442299 1081367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 19:10:46.468608 1081367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem --> /usr/share/ca-certificates/1068954.pem (1338 bytes)
	I0717 19:10:46.493910 1081367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /usr/share/ca-certificates/10689542.pem (1708 bytes)
	I0717 19:10:46.519837 1081367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:10:46.545779 1081367 ssh_runner.go:195] Run: openssl version
	I0717 19:10:46.551852 1081367 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0717 19:10:46.552151 1081367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1068954.pem && ln -fs /usr/share/ca-certificates/1068954.pem /etc/ssl/certs/1068954.pem"
	I0717 19:10:46.562610 1081367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1068954.pem
	I0717 19:10:46.567918 1081367 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 17 18:52 /usr/share/ca-certificates/1068954.pem
	I0717 19:10:46.568054 1081367 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:52 /usr/share/ca-certificates/1068954.pem
	I0717 19:10:46.568144 1081367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1068954.pem
	I0717 19:10:46.574174 1081367 command_runner.go:130] > 51391683
	I0717 19:10:46.574562 1081367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1068954.pem /etc/ssl/certs/51391683.0"
	I0717 19:10:46.584846 1081367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10689542.pem && ln -fs /usr/share/ca-certificates/10689542.pem /etc/ssl/certs/10689542.pem"
	I0717 19:10:46.597107 1081367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10689542.pem
	I0717 19:10:46.602593 1081367 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 17 18:52 /usr/share/ca-certificates/10689542.pem
	I0717 19:10:46.602632 1081367 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:52 /usr/share/ca-certificates/10689542.pem
	I0717 19:10:46.602682 1081367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10689542.pem
	I0717 19:10:46.609166 1081367 command_runner.go:130] > 3ec20f2e
	I0717 19:10:46.609408 1081367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10689542.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:10:46.620045 1081367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:10:46.630588 1081367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:10:46.635722 1081367 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 17 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:10:46.635820 1081367 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:10:46.635887 1081367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:10:46.642375 1081367 command_runner.go:130] > b5213941
	I0717 19:10:46.642663 1081367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
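For each CA bundle copied to /usr/share/ca-certificates, the steps above ask openssl for the certificate's subject hash and link it as <hash>.0 under /etc/ssl/certs so the system trust store picks it up. A sketch of that hash-and-symlink step; the paths and running without sudo are assumptions made only for illustration.

// certlink.go
// Sketch of hashing a CA cert with openssl and linking it as <hash>.0.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem in the log
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mirror `ln -fs`: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}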
	I0717 19:10:46.653435 1081367 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 19:10:46.659289 1081367 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 19:10:46.659387 1081367 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 19:10:46.659495 1081367 ssh_runner.go:195] Run: crio config
	I0717 19:10:46.716757 1081367 command_runner.go:130] ! time="2023-07-17 19:10:46.703187186Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0717 19:10:46.716825 1081367 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0717 19:10:46.729364 1081367 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0717 19:10:46.729390 1081367 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0717 19:10:46.729396 1081367 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0717 19:10:46.729401 1081367 command_runner.go:130] > #
	I0717 19:10:46.729407 1081367 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0717 19:10:46.729413 1081367 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0717 19:10:46.729419 1081367 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0717 19:10:46.729426 1081367 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0717 19:10:46.729431 1081367 command_runner.go:130] > # reload'.
	I0717 19:10:46.729437 1081367 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0717 19:10:46.729443 1081367 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0717 19:10:46.729453 1081367 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0717 19:10:46.729460 1081367 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0717 19:10:46.729466 1081367 command_runner.go:130] > [crio]
	I0717 19:10:46.729472 1081367 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0717 19:10:46.729477 1081367 command_runner.go:130] > # containers images, in this directory.
	I0717 19:10:46.729484 1081367 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0717 19:10:46.729493 1081367 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0717 19:10:46.729500 1081367 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0717 19:10:46.729506 1081367 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0717 19:10:46.729512 1081367 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0717 19:10:46.729521 1081367 command_runner.go:130] > storage_driver = "overlay"
	I0717 19:10:46.729527 1081367 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0717 19:10:46.729538 1081367 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0717 19:10:46.729543 1081367 command_runner.go:130] > storage_option = [
	I0717 19:10:46.729548 1081367 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0717 19:10:46.729554 1081367 command_runner.go:130] > ]
	I0717 19:10:46.729583 1081367 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0717 19:10:46.729589 1081367 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0717 19:10:46.729594 1081367 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0717 19:10:46.729599 1081367 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0717 19:10:46.729606 1081367 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0717 19:10:46.729610 1081367 command_runner.go:130] > # always happen on a node reboot
	I0717 19:10:46.729617 1081367 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0717 19:10:46.729623 1081367 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0717 19:10:46.729631 1081367 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0717 19:10:46.729640 1081367 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0717 19:10:46.729648 1081367 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0717 19:10:46.729656 1081367 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0717 19:10:46.729665 1081367 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0717 19:10:46.729670 1081367 command_runner.go:130] > # internal_wipe = true
	I0717 19:10:46.729676 1081367 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0717 19:10:46.729685 1081367 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0717 19:10:46.729691 1081367 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0717 19:10:46.729698 1081367 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0717 19:10:46.729704 1081367 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0717 19:10:46.729710 1081367 command_runner.go:130] > [crio.api]
	I0717 19:10:46.729716 1081367 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0717 19:10:46.729721 1081367 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0717 19:10:46.729726 1081367 command_runner.go:130] > # IP address on which the stream server will listen.
	I0717 19:10:46.729733 1081367 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0717 19:10:46.729739 1081367 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0717 19:10:46.729744 1081367 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0717 19:10:46.729750 1081367 command_runner.go:130] > # stream_port = "0"
	I0717 19:10:46.729758 1081367 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0717 19:10:46.729764 1081367 command_runner.go:130] > # stream_enable_tls = false
	I0717 19:10:46.729770 1081367 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0717 19:10:46.729776 1081367 command_runner.go:130] > # stream_idle_timeout = ""
	I0717 19:10:46.729782 1081367 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0717 19:10:46.729794 1081367 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0717 19:10:46.729800 1081367 command_runner.go:130] > # minutes.
	I0717 19:10:46.729804 1081367 command_runner.go:130] > # stream_tls_cert = ""
	I0717 19:10:46.729813 1081367 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0717 19:10:46.729819 1081367 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0717 19:10:46.729825 1081367 command_runner.go:130] > # stream_tls_key = ""
	I0717 19:10:46.729831 1081367 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0717 19:10:46.729839 1081367 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0717 19:10:46.729845 1081367 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0717 19:10:46.729849 1081367 command_runner.go:130] > # stream_tls_ca = ""
	I0717 19:10:46.729858 1081367 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0717 19:10:46.729862 1081367 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0717 19:10:46.729872 1081367 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0717 19:10:46.729877 1081367 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0717 19:10:46.729896 1081367 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0717 19:10:46.729904 1081367 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0717 19:10:46.729907 1081367 command_runner.go:130] > [crio.runtime]
	I0717 19:10:46.729915 1081367 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0717 19:10:46.729920 1081367 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0717 19:10:46.729926 1081367 command_runner.go:130] > # "nofile=1024:2048"
	I0717 19:10:46.729932 1081367 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0717 19:10:46.729936 1081367 command_runner.go:130] > # default_ulimits = [
	I0717 19:10:46.729942 1081367 command_runner.go:130] > # ]
	I0717 19:10:46.729948 1081367 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0717 19:10:46.729954 1081367 command_runner.go:130] > # no_pivot = false
	I0717 19:10:46.729960 1081367 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0717 19:10:46.729968 1081367 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0717 19:10:46.729973 1081367 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0717 19:10:46.729990 1081367 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0717 19:10:46.729997 1081367 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0717 19:10:46.730003 1081367 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 19:10:46.730010 1081367 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0717 19:10:46.730014 1081367 command_runner.go:130] > # Cgroup setting for conmon
	I0717 19:10:46.730023 1081367 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0717 19:10:46.730027 1081367 command_runner.go:130] > conmon_cgroup = "pod"
	I0717 19:10:46.730034 1081367 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0717 19:10:46.730042 1081367 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0717 19:10:46.730051 1081367 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 19:10:46.730057 1081367 command_runner.go:130] > conmon_env = [
	I0717 19:10:46.730063 1081367 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0717 19:10:46.730066 1081367 command_runner.go:130] > ]
	I0717 19:10:46.730072 1081367 command_runner.go:130] > # Additional environment variables to set for all the
	I0717 19:10:46.730079 1081367 command_runner.go:130] > # containers. These are overridden if set in the
	I0717 19:10:46.730084 1081367 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0717 19:10:46.730091 1081367 command_runner.go:130] > # default_env = [
	I0717 19:10:46.730094 1081367 command_runner.go:130] > # ]
	I0717 19:10:46.730099 1081367 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0717 19:10:46.730107 1081367 command_runner.go:130] > # selinux = false
	I0717 19:10:46.730112 1081367 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0717 19:10:46.730119 1081367 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0717 19:10:46.730126 1081367 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0717 19:10:46.730132 1081367 command_runner.go:130] > # seccomp_profile = ""
	I0717 19:10:46.730141 1081367 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0717 19:10:46.730146 1081367 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0717 19:10:46.730154 1081367 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0717 19:10:46.730159 1081367 command_runner.go:130] > # which might increase security.
	I0717 19:10:46.730165 1081367 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0717 19:10:46.730171 1081367 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0717 19:10:46.730180 1081367 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0717 19:10:46.730185 1081367 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0717 19:10:46.730194 1081367 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0717 19:10:46.730199 1081367 command_runner.go:130] > # This option supports live configuration reload.
	I0717 19:10:46.730208 1081367 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0717 19:10:46.730213 1081367 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0717 19:10:46.730220 1081367 command_runner.go:130] > # the cgroup blockio controller.
	I0717 19:10:46.730225 1081367 command_runner.go:130] > # blockio_config_file = ""
	I0717 19:10:46.730233 1081367 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0717 19:10:46.730237 1081367 command_runner.go:130] > # irqbalance daemon.
	I0717 19:10:46.730245 1081367 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0717 19:10:46.730251 1081367 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0717 19:10:46.730258 1081367 command_runner.go:130] > # This option supports live configuration reload.
	I0717 19:10:46.730262 1081367 command_runner.go:130] > # rdt_config_file = ""
	I0717 19:10:46.730271 1081367 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0717 19:10:46.730275 1081367 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0717 19:10:46.730281 1081367 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0717 19:10:46.730286 1081367 command_runner.go:130] > # separate_pull_cgroup = ""
	I0717 19:10:46.730292 1081367 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0717 19:10:46.730300 1081367 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0717 19:10:46.730304 1081367 command_runner.go:130] > # will be added.
	I0717 19:10:46.730311 1081367 command_runner.go:130] > # default_capabilities = [
	I0717 19:10:46.730314 1081367 command_runner.go:130] > # 	"CHOWN",
	I0717 19:10:46.730318 1081367 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0717 19:10:46.730322 1081367 command_runner.go:130] > # 	"FSETID",
	I0717 19:10:46.730325 1081367 command_runner.go:130] > # 	"FOWNER",
	I0717 19:10:46.730331 1081367 command_runner.go:130] > # 	"SETGID",
	I0717 19:10:46.730335 1081367 command_runner.go:130] > # 	"SETUID",
	I0717 19:10:46.730341 1081367 command_runner.go:130] > # 	"SETPCAP",
	I0717 19:10:46.730344 1081367 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0717 19:10:46.730348 1081367 command_runner.go:130] > # 	"KILL",
	I0717 19:10:46.730354 1081367 command_runner.go:130] > # ]
	I0717 19:10:46.730360 1081367 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0717 19:10:46.730365 1081367 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 19:10:46.730372 1081367 command_runner.go:130] > # default_sysctls = [
	I0717 19:10:46.730375 1081367 command_runner.go:130] > # ]
	I0717 19:10:46.730380 1081367 command_runner.go:130] > # List of devices on the host that a
	I0717 19:10:46.730388 1081367 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0717 19:10:46.730392 1081367 command_runner.go:130] > # allowed_devices = [
	I0717 19:10:46.730399 1081367 command_runner.go:130] > # 	"/dev/fuse",
	I0717 19:10:46.730403 1081367 command_runner.go:130] > # ]
	I0717 19:10:46.730407 1081367 command_runner.go:130] > # List of additional devices, specified as
	I0717 19:10:46.730415 1081367 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0717 19:10:46.730422 1081367 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0717 19:10:46.730441 1081367 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 19:10:46.730447 1081367 command_runner.go:130] > # additional_devices = [
	I0717 19:10:46.730451 1081367 command_runner.go:130] > # ]
	I0717 19:10:46.730456 1081367 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0717 19:10:46.730460 1081367 command_runner.go:130] > # cdi_spec_dirs = [
	I0717 19:10:46.730464 1081367 command_runner.go:130] > # 	"/etc/cdi",
	I0717 19:10:46.730470 1081367 command_runner.go:130] > # 	"/var/run/cdi",
	I0717 19:10:46.730474 1081367 command_runner.go:130] > # ]
	I0717 19:10:46.730483 1081367 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0717 19:10:46.730488 1081367 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0717 19:10:46.730494 1081367 command_runner.go:130] > # Defaults to false.
	I0717 19:10:46.730499 1081367 command_runner.go:130] > # device_ownership_from_security_context = false
	I0717 19:10:46.730505 1081367 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0717 19:10:46.730511 1081367 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0717 19:10:46.730518 1081367 command_runner.go:130] > # hooks_dir = [
	I0717 19:10:46.730522 1081367 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0717 19:10:46.730528 1081367 command_runner.go:130] > # ]
	I0717 19:10:46.730534 1081367 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0717 19:10:46.730542 1081367 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0717 19:10:46.730548 1081367 command_runner.go:130] > # its default mounts from the following two files:
	I0717 19:10:46.730551 1081367 command_runner.go:130] > #
	I0717 19:10:46.730558 1081367 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0717 19:10:46.730564 1081367 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0717 19:10:46.730572 1081367 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0717 19:10:46.730576 1081367 command_runner.go:130] > #
	I0717 19:10:46.730582 1081367 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0717 19:10:46.730590 1081367 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0717 19:10:46.730596 1081367 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0717 19:10:46.730603 1081367 command_runner.go:130] > #      only add mounts it finds in this file.
	I0717 19:10:46.730606 1081367 command_runner.go:130] > #
	I0717 19:10:46.730612 1081367 command_runner.go:130] > # default_mounts_file = ""
	I0717 19:10:46.730617 1081367 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0717 19:10:46.730624 1081367 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0717 19:10:46.730630 1081367 command_runner.go:130] > pids_limit = 1024
	I0717 19:10:46.730636 1081367 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0717 19:10:46.730642 1081367 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0717 19:10:46.730649 1081367 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0717 19:10:46.730657 1081367 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0717 19:10:46.730664 1081367 command_runner.go:130] > # log_size_max = -1
	I0717 19:10:46.730670 1081367 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0717 19:10:46.730676 1081367 command_runner.go:130] > # log_to_journald = false
	I0717 19:10:46.730682 1081367 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0717 19:10:46.730690 1081367 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0717 19:10:46.730695 1081367 command_runner.go:130] > # Path to directory for container attach sockets.
	I0717 19:10:46.730701 1081367 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0717 19:10:46.730707 1081367 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0717 19:10:46.730714 1081367 command_runner.go:130] > # bind_mount_prefix = ""
	I0717 19:10:46.730721 1081367 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0717 19:10:46.730728 1081367 command_runner.go:130] > # read_only = false
	I0717 19:10:46.730733 1081367 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0717 19:10:46.730742 1081367 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0717 19:10:46.730746 1081367 command_runner.go:130] > # live configuration reload.
	I0717 19:10:46.730752 1081367 command_runner.go:130] > # log_level = "info"
	I0717 19:10:46.730758 1081367 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0717 19:10:46.730765 1081367 command_runner.go:130] > # This option supports live configuration reload.
	I0717 19:10:46.730769 1081367 command_runner.go:130] > # log_filter = ""
	I0717 19:10:46.730777 1081367 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0717 19:10:46.730783 1081367 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0717 19:10:46.730789 1081367 command_runner.go:130] > # separated by comma.
	I0717 19:10:46.730793 1081367 command_runner.go:130] > # uid_mappings = ""
	I0717 19:10:46.730800 1081367 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0717 19:10:46.730806 1081367 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0717 19:10:46.730811 1081367 command_runner.go:130] > # separated by comma.
	I0717 19:10:46.730816 1081367 command_runner.go:130] > # gid_mappings = ""
	I0717 19:10:46.730822 1081367 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0717 19:10:46.730830 1081367 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 19:10:46.730835 1081367 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 19:10:46.730842 1081367 command_runner.go:130] > # minimum_mappable_uid = -1
	I0717 19:10:46.730848 1081367 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0717 19:10:46.730856 1081367 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 19:10:46.730862 1081367 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 19:10:46.730868 1081367 command_runner.go:130] > # minimum_mappable_gid = -1
	I0717 19:10:46.730874 1081367 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0717 19:10:46.730882 1081367 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0717 19:10:46.730887 1081367 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0717 19:10:46.730893 1081367 command_runner.go:130] > # ctr_stop_timeout = 30
	I0717 19:10:46.730899 1081367 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0717 19:10:46.730906 1081367 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0717 19:10:46.730912 1081367 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0717 19:10:46.730919 1081367 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0717 19:10:46.730925 1081367 command_runner.go:130] > drop_infra_ctr = false
	I0717 19:10:46.730934 1081367 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0717 19:10:46.730939 1081367 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0717 19:10:46.730949 1081367 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0717 19:10:46.730955 1081367 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0717 19:10:46.730960 1081367 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0717 19:10:46.730966 1081367 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0717 19:10:46.730973 1081367 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0717 19:10:46.730984 1081367 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0717 19:10:46.730991 1081367 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0717 19:10:46.730997 1081367 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0717 19:10:46.731005 1081367 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0717 19:10:46.731011 1081367 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0717 19:10:46.731017 1081367 command_runner.go:130] > # default_runtime = "runc"
	I0717 19:10:46.731023 1081367 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0717 19:10:46.731032 1081367 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0717 19:10:46.731043 1081367 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0717 19:10:46.731048 1081367 command_runner.go:130] > # creation as a file is not desired either.
	I0717 19:10:46.731057 1081367 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0717 19:10:46.731064 1081367 command_runner.go:130] > # the hostname is being managed dynamically.
	I0717 19:10:46.731068 1081367 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0717 19:10:46.731074 1081367 command_runner.go:130] > # ]
	I0717 19:10:46.731081 1081367 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0717 19:10:46.731088 1081367 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0717 19:10:46.731096 1081367 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0717 19:10:46.731109 1081367 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0717 19:10:46.731115 1081367 command_runner.go:130] > #
	I0717 19:10:46.731120 1081367 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0717 19:10:46.731127 1081367 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0717 19:10:46.731131 1081367 command_runner.go:130] > #  runtime_type = "oci"
	I0717 19:10:46.731136 1081367 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0717 19:10:46.731143 1081367 command_runner.go:130] > #  privileged_without_host_devices = false
	I0717 19:10:46.731148 1081367 command_runner.go:130] > #  allowed_annotations = []
	I0717 19:10:46.731154 1081367 command_runner.go:130] > # Where:
	I0717 19:10:46.731160 1081367 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0717 19:10:46.731170 1081367 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0717 19:10:46.731176 1081367 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0717 19:10:46.731184 1081367 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0717 19:10:46.731189 1081367 command_runner.go:130] > #   in $PATH.
	I0717 19:10:46.731195 1081367 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0717 19:10:46.731203 1081367 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0717 19:10:46.731209 1081367 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0717 19:10:46.731215 1081367 command_runner.go:130] > #   state.
	I0717 19:10:46.731221 1081367 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0717 19:10:46.731227 1081367 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0717 19:10:46.731235 1081367 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0717 19:10:46.731243 1081367 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0717 19:10:46.731249 1081367 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0717 19:10:46.731256 1081367 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0717 19:10:46.731260 1081367 command_runner.go:130] > #   The currently recognized values are:
	I0717 19:10:46.731269 1081367 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0717 19:10:46.731276 1081367 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0717 19:10:46.731284 1081367 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0717 19:10:46.731290 1081367 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0717 19:10:46.731299 1081367 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0717 19:10:46.731306 1081367 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0717 19:10:46.731314 1081367 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0717 19:10:46.731320 1081367 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0717 19:10:46.731327 1081367 command_runner.go:130] > #   should be moved to the container's cgroup
	I0717 19:10:46.731331 1081367 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0717 19:10:46.731340 1081367 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0717 19:10:46.731344 1081367 command_runner.go:130] > runtime_type = "oci"
	I0717 19:10:46.731350 1081367 command_runner.go:130] > runtime_root = "/run/runc"
	I0717 19:10:46.731354 1081367 command_runner.go:130] > runtime_config_path = ""
	I0717 19:10:46.731358 1081367 command_runner.go:130] > monitor_path = ""
	I0717 19:10:46.731364 1081367 command_runner.go:130] > monitor_cgroup = ""
	I0717 19:10:46.731368 1081367 command_runner.go:130] > monitor_exec_cgroup = ""
	I0717 19:10:46.731374 1081367 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0717 19:10:46.731380 1081367 command_runner.go:130] > # running containers
	I0717 19:10:46.731384 1081367 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0717 19:10:46.731391 1081367 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0717 19:10:46.731422 1081367 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0717 19:10:46.731430 1081367 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0717 19:10:46.731435 1081367 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0717 19:10:46.731441 1081367 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0717 19:10:46.731445 1081367 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0717 19:10:46.731452 1081367 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0717 19:10:46.731457 1081367 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0717 19:10:46.731463 1081367 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0717 19:10:46.731470 1081367 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0717 19:10:46.731478 1081367 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0717 19:10:46.731484 1081367 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0717 19:10:46.731493 1081367 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0717 19:10:46.731502 1081367 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0717 19:10:46.731508 1081367 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0717 19:10:46.731516 1081367 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0717 19:10:46.731526 1081367 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0717 19:10:46.731532 1081367 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0717 19:10:46.731541 1081367 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0717 19:10:46.731545 1081367 command_runner.go:130] > # Example:
	I0717 19:10:46.731550 1081367 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0717 19:10:46.731557 1081367 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0717 19:10:46.731561 1081367 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0717 19:10:46.731567 1081367 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0717 19:10:46.731570 1081367 command_runner.go:130] > # cpuset = 0
	I0717 19:10:46.731576 1081367 command_runner.go:130] > # cpushares = "0-1"
	I0717 19:10:46.731579 1081367 command_runner.go:130] > # Where:
	I0717 19:10:46.731586 1081367 command_runner.go:130] > # The workload name is workload-type.
	I0717 19:10:46.731593 1081367 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0717 19:10:46.731600 1081367 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0717 19:10:46.731607 1081367 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0717 19:10:46.731618 1081367 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0717 19:10:46.731625 1081367 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0717 19:10:46.731628 1081367 command_runner.go:130] > # 
	I0717 19:10:46.731634 1081367 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0717 19:10:46.731640 1081367 command_runner.go:130] > #
	I0717 19:10:46.731646 1081367 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0717 19:10:46.731654 1081367 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0717 19:10:46.731660 1081367 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0717 19:10:46.731669 1081367 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0717 19:10:46.731677 1081367 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0717 19:10:46.731681 1081367 command_runner.go:130] > [crio.image]
	I0717 19:10:46.731689 1081367 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0717 19:10:46.731693 1081367 command_runner.go:130] > # default_transport = "docker://"
	I0717 19:10:46.731702 1081367 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0717 19:10:46.731708 1081367 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0717 19:10:46.731714 1081367 command_runner.go:130] > # global_auth_file = ""
	I0717 19:10:46.731719 1081367 command_runner.go:130] > # The image used to instantiate infra containers.
	I0717 19:10:46.731724 1081367 command_runner.go:130] > # This option supports live configuration reload.
	I0717 19:10:46.731732 1081367 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0717 19:10:46.731739 1081367 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0717 19:10:46.731747 1081367 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0717 19:10:46.731752 1081367 command_runner.go:130] > # This option supports live configuration reload.
	I0717 19:10:46.731759 1081367 command_runner.go:130] > # pause_image_auth_file = ""
	I0717 19:10:46.731765 1081367 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0717 19:10:46.731773 1081367 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0717 19:10:46.731779 1081367 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0717 19:10:46.731787 1081367 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0717 19:10:46.731791 1081367 command_runner.go:130] > # pause_command = "/pause"
	I0717 19:10:46.731799 1081367 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0717 19:10:46.731806 1081367 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0717 19:10:46.731814 1081367 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0717 19:10:46.731820 1081367 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0717 19:10:46.731827 1081367 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0717 19:10:46.731831 1081367 command_runner.go:130] > # signature_policy = ""
	I0717 19:10:46.731838 1081367 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0717 19:10:46.731844 1081367 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0717 19:10:46.731850 1081367 command_runner.go:130] > # changing them here.
	I0717 19:10:46.731854 1081367 command_runner.go:130] > # insecure_registries = [
	I0717 19:10:46.731860 1081367 command_runner.go:130] > # ]
	I0717 19:10:46.731868 1081367 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0717 19:10:46.731876 1081367 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0717 19:10:46.731881 1081367 command_runner.go:130] > # image_volumes = "mkdir"
	I0717 19:10:46.731888 1081367 command_runner.go:130] > # Temporary directory to use for storing big files
	I0717 19:10:46.731892 1081367 command_runner.go:130] > # big_files_temporary_dir = ""
	I0717 19:10:46.731898 1081367 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0717 19:10:46.731904 1081367 command_runner.go:130] > # CNI plugins.
	I0717 19:10:46.731907 1081367 command_runner.go:130] > [crio.network]
	I0717 19:10:46.731913 1081367 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0717 19:10:46.731921 1081367 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0717 19:10:46.731925 1081367 command_runner.go:130] > # cni_default_network = ""
	I0717 19:10:46.731933 1081367 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0717 19:10:46.731938 1081367 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0717 19:10:46.731944 1081367 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0717 19:10:46.731948 1081367 command_runner.go:130] > # plugin_dirs = [
	I0717 19:10:46.731951 1081367 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0717 19:10:46.731955 1081367 command_runner.go:130] > # ]
	I0717 19:10:46.731960 1081367 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0717 19:10:46.731966 1081367 command_runner.go:130] > [crio.metrics]
	I0717 19:10:46.731971 1081367 command_runner.go:130] > # Globally enable or disable metrics support.
	I0717 19:10:46.731975 1081367 command_runner.go:130] > enable_metrics = true
	I0717 19:10:46.731984 1081367 command_runner.go:130] > # Specify enabled metrics collectors.
	I0717 19:10:46.731991 1081367 command_runner.go:130] > # Per default all metrics are enabled.
	I0717 19:10:46.731997 1081367 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0717 19:10:46.732005 1081367 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0717 19:10:46.732011 1081367 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0717 19:10:46.732016 1081367 command_runner.go:130] > # metrics_collectors = [
	I0717 19:10:46.732020 1081367 command_runner.go:130] > # 	"operations",
	I0717 19:10:46.732027 1081367 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0717 19:10:46.732032 1081367 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0717 19:10:46.732038 1081367 command_runner.go:130] > # 	"operations_errors",
	I0717 19:10:46.732042 1081367 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0717 19:10:46.732046 1081367 command_runner.go:130] > # 	"image_pulls_by_name",
	I0717 19:10:46.732050 1081367 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0717 19:10:46.732054 1081367 command_runner.go:130] > # 	"image_pulls_failures",
	I0717 19:10:46.732061 1081367 command_runner.go:130] > # 	"image_pulls_successes",
	I0717 19:10:46.732065 1081367 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0717 19:10:46.732072 1081367 command_runner.go:130] > # 	"image_layer_reuse",
	I0717 19:10:46.732077 1081367 command_runner.go:130] > # 	"containers_oom_total",
	I0717 19:10:46.732084 1081367 command_runner.go:130] > # 	"containers_oom",
	I0717 19:10:46.732088 1081367 command_runner.go:130] > # 	"processes_defunct",
	I0717 19:10:46.732094 1081367 command_runner.go:130] > # 	"operations_total",
	I0717 19:10:46.732098 1081367 command_runner.go:130] > # 	"operations_latency_seconds",
	I0717 19:10:46.732105 1081367 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0717 19:10:46.732110 1081367 command_runner.go:130] > # 	"operations_errors_total",
	I0717 19:10:46.732115 1081367 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0717 19:10:46.732120 1081367 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0717 19:10:46.732126 1081367 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0717 19:10:46.732130 1081367 command_runner.go:130] > # 	"image_pulls_success_total",
	I0717 19:10:46.732134 1081367 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0717 19:10:46.732141 1081367 command_runner.go:130] > # 	"containers_oom_count_total",
	I0717 19:10:46.732144 1081367 command_runner.go:130] > # ]
	I0717 19:10:46.732152 1081367 command_runner.go:130] > # The port on which the metrics server will listen.
	I0717 19:10:46.732156 1081367 command_runner.go:130] > # metrics_port = 9090
	I0717 19:10:46.732161 1081367 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0717 19:10:46.732167 1081367 command_runner.go:130] > # metrics_socket = ""
	I0717 19:10:46.732175 1081367 command_runner.go:130] > # The certificate for the secure metrics server.
	I0717 19:10:46.732181 1081367 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0717 19:10:46.732190 1081367 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0717 19:10:46.732196 1081367 command_runner.go:130] > # certificate on any modification event.
	I0717 19:10:46.732202 1081367 command_runner.go:130] > # metrics_cert = ""
	I0717 19:10:46.732208 1081367 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0717 19:10:46.732214 1081367 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0717 19:10:46.732218 1081367 command_runner.go:130] > # metrics_key = ""
	I0717 19:10:46.732224 1081367 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0717 19:10:46.732230 1081367 command_runner.go:130] > [crio.tracing]
	I0717 19:10:46.732236 1081367 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0717 19:10:46.732242 1081367 command_runner.go:130] > # enable_tracing = false
	I0717 19:10:46.732248 1081367 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0717 19:10:46.732255 1081367 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0717 19:10:46.732260 1081367 command_runner.go:130] > # Number of samples to collect per million spans.
	I0717 19:10:46.732267 1081367 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0717 19:10:46.732274 1081367 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0717 19:10:46.732280 1081367 command_runner.go:130] > [crio.stats]
	I0717 19:10:46.732286 1081367 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0717 19:10:46.732294 1081367 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0717 19:10:46.732298 1081367 command_runner.go:130] > # stats_collection_period = 0
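The dump above is the effective crio.conf (TOML) the node will run with: cgroupfs as the cgroup manager, pids_limit of 1024, runc as the only configured runtime handler, and registry.k8s.io/pause:3.9 as the infra image. As a minimal sketch of reading a few of those fields programmatically (this is not how CRI-O or the test consumes the file; the github.com/BurntSushi/toml dependency and the /etc/crio/crio.conf path are assumptions):

package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

// crioConfig mirrors just the handful of keys referenced above.
type crioConfig struct {
	Crio struct {
		Runtime struct {
			CgroupManager  string `toml:"cgroup_manager"`
			Conmon         string `toml:"conmon"`
			PidsLimit      int64  `toml:"pids_limit"`
			DefaultRuntime string `toml:"default_runtime"`
		} `toml:"runtime"`
		Image struct {
			PauseImage string `toml:"pause_image"`
		} `toml:"image"`
	} `toml:"crio"`
}

func main() {
	var cfg crioConfig
	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
		log.Fatal(err)
	}
	// With the config dumped above this prints: cgroupfs 1024 registry.k8s.io/pause:3.9
	fmt.Println(cfg.Crio.Runtime.CgroupManager, cfg.Crio.Runtime.PidsLimit, cfg.Crio.Image.PauseImage)
}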
	I0717 19:10:46.732373 1081367 cni.go:84] Creating CNI manager for ""
	I0717 19:10:46.732383 1081367 cni.go:137] 2 nodes found, recommending kindnet
	I0717 19:10:46.732395 1081367 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 19:10:46.732417 1081367 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.49 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-464644 NodeName:multinode-464644-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.49 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:10:46.732552 1081367 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.49
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-464644-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.49
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:10:46.732608 1081367 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-464644-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.49
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:multinode-464644 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
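The kubeadm InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration documents and the kubelet systemd drop-in above are rendered from the kubeadm options logged at 19:10:46.732417, with the node-specific values (advertise address, node name, node IP) substituted in. A stripped-down sketch of that substitution step, assuming a plain text/template rather than minikube's actual templates:

package main

import (
	"os"
	"text/template"
)

// initCfg is only the first of the logged documents, reduced to the fields
// that vary per node.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	// Values logged for multinode-464644-m02.
	params := struct {
		AdvertiseAddress, NodeName, NodeIP string
		APIServerPort                      int
	}{"192.168.39.49", "multinode-464644-m02", "192.168.39.49", 8443}
	if err := t.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}

Running the sketch prints an InitConfiguration equivalent to the first document logged above.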
	I0717 19:10:46.732668 1081367 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 19:10:46.742954 1081367 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.27.3': No such file or directory
	I0717 19:10:46.743122 1081367 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.27.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.27.3': No such file or directory
	
	Initiating transfer...
	I0717 19:10:46.743199 1081367 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.27.3
	I0717 19:10:46.753379 1081367 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubectl.sha256
	I0717 19:10:46.753399 1081367 download.go:107] Downloading: https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/linux/amd64/v1.27.3/kubelet
	I0717 19:10:46.753405 1081367 download.go:107] Downloading: https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/linux/amd64/v1.27.3/kubeadm
	I0717 19:10:46.753417 1081367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/linux/amd64/v1.27.3/kubectl -> /var/lib/minikube/binaries/v1.27.3/kubectl
	I0717 19:10:46.753513 1081367 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.27.3/kubectl
	I0717 19:10:46.761197 1081367 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.27.3/kubectl': No such file or directory
	I0717 19:10:46.761237 1081367 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.27.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.27.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.27.3/kubectl': No such file or directory
	I0717 19:10:46.761269 1081367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/linux/amd64/v1.27.3/kubectl --> /var/lib/minikube/binaries/v1.27.3/kubectl (49258496 bytes)
	I0717 19:10:47.621380 1081367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/linux/amd64/v1.27.3/kubeadm -> /var/lib/minikube/binaries/v1.27.3/kubeadm
	I0717 19:10:47.621474 1081367 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.27.3/kubeadm
	I0717 19:10:47.627638 1081367 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.27.3/kubeadm': No such file or directory
	I0717 19:10:47.627976 1081367 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.27.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.27.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.27.3/kubeadm': No such file or directory
	I0717 19:10:47.628025 1081367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/linux/amd64/v1.27.3/kubeadm --> /var/lib/minikube/binaries/v1.27.3/kubeadm (48160768 bytes)
	I0717 19:10:48.466657 1081367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:10:48.482153 1081367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/linux/amd64/v1.27.3/kubelet -> /var/lib/minikube/binaries/v1.27.3/kubelet
	I0717 19:10:48.482298 1081367 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.27.3/kubelet
	I0717 19:10:48.487674 1081367 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.27.3/kubelet': No such file or directory
	I0717 19:10:48.487736 1081367 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.27.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.27.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.27.3/kubelet': No such file or directory
	I0717 19:10:48.487767 1081367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/linux/amd64/v1.27.3/kubelet --> /var/lib/minikube/binaries/v1.27.3/kubelet (106160128 bytes)
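The three transfers above follow one pattern: stat the target under /var/lib/minikube/binaries/v1.27.3, and only when the stat fails (exit status 1) copy the cached binary over. A sketch of that check-then-copy logic, with plain ssh/scp invocations standing in for the test's ssh_runner (ensureBinary and the bare host name are hypothetical):

package main

import (
	"fmt"
	"os/exec"
)

// ensureBinary mirrors the logged existence check: if stat succeeds the binary
// is already present; otherwise transfer the cached copy.
func ensureBinary(node, src, dst string) error {
	if err := exec.Command("ssh", node, "stat", dst).Run(); err == nil {
		return nil
	}
	fmt.Printf("copying %s -> %s:%s\n", src, node, dst)
	return exec.Command("scp", src, node+":"+dst).Run()
}

func main() {
	cache := "/home/jenkins/minikube-integration/16890-1061725/.minikube/cache/linux/amd64/v1.27.3/"
	for _, bin := range []string{"kubectl", "kubeadm", "kubelet"} {
		if err := ensureBinary("multinode-464644-m02", cache+bin,
			"/var/lib/minikube/binaries/v1.27.3/"+bin); err != nil {
			fmt.Println("transfer failed:", err)
		}
	}
}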
	I0717 19:10:49.018743 1081367 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0717 19:10:49.028613 1081367 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0717 19:10:49.046638 1081367 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:10:49.065006 1081367 ssh_runner.go:195] Run: grep 192.168.39.174	control-plane.minikube.internal$ /etc/hosts
	I0717 19:10:49.069450 1081367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
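The one-liner above keeps /etc/hosts idempotent: it filters out any existing line ending in a tab followed by control-plane.minikube.internal, appends the current mapping to 192.168.39.174, and copies the result back into place, so repeated starts never accumulate duplicate entries. A sketch of the same upsert in Go (it edits a local file, whereas the logged command runs remotely under sudo; upsertHost is hypothetical):

package main

import (
	"os"
	"strings"
)

func upsertHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous mapping for the alias, like grep -v $'\t<host>$'.
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.39.174", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}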
	I0717 19:10:49.081927 1081367 host.go:66] Checking if "multinode-464644" exists ...
	I0717 19:10:49.082207 1081367 config.go:182] Loaded profile config "multinode-464644": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:10:49.082346 1081367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:10:49.082391 1081367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:10:49.099561 1081367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32915
	I0717 19:10:49.100005 1081367 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:10:49.100505 1081367 main.go:141] libmachine: Using API Version  1
	I0717 19:10:49.100526 1081367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:10:49.100940 1081367 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:10:49.101201 1081367 main.go:141] libmachine: (multinode-464644) Calling .DriverName
	I0717 19:10:49.101364 1081367 start.go:304] JoinCluster: &{Name:multinode-464644 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-464644 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.49 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountStri
ng:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:10:49.101466 1081367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0717 19:10:49.101492 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHHostname
	I0717 19:10:49.104923 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:10:49.105410 1081367 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:09:20 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:10:49.105444 1081367 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:10:49.105652 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHPort
	I0717 19:10:49.105974 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHKeyPath
	I0717 19:10:49.106261 1081367 main.go:141] libmachine: (multinode-464644) Calling .GetSSHUsername
	I0717 19:10:49.106509 1081367 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644/id_rsa Username:docker}
	I0717 19:10:49.283002 1081367 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token ufbsog.5qe5xkkyjq1nn6sv --discovery-token-ca-cert-hash sha256:a35378d49f60cb3201ada96dd7ea3b95e7cce0b7f53bd002f18d31a55869c8fc 
	I0717 19:10:49.290974 1081367 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.49 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0717 19:10:49.291038 1081367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ufbsog.5qe5xkkyjq1nn6sv --discovery-token-ca-cert-hash sha256:a35378d49f60cb3201ada96dd7ea3b95e7cce0b7f53bd002f18d31a55869c8fc --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-464644-m02"
	I0717 19:10:49.340493 1081367 command_runner.go:130] ! W0717 19:10:49.330038     823 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0717 19:10:49.471467 1081367 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 19:10:52.087964 1081367 command_runner.go:130] > [preflight] Running pre-flight checks
	I0717 19:10:52.088024 1081367 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0717 19:10:52.088043 1081367 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0717 19:10:52.088056 1081367 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:10:52.088066 1081367 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:10:52.088074 1081367 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0717 19:10:52.088085 1081367 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0717 19:10:52.088095 1081367 command_runner.go:130] > This node has joined the cluster:
	I0717 19:10:52.088109 1081367 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0717 19:10:52.088122 1081367 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0717 19:10:52.088133 1081367 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0717 19:10:52.088163 1081367 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ufbsog.5qe5xkkyjq1nn6sv --discovery-token-ca-cert-hash sha256:a35378d49f60cb3201ada96dd7ea3b95e7cce0b7f53bd002f18d31a55869c8fc --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-464644-m02": (2.797106549s)
	I0717 19:10:52.088199 1081367 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0717 19:10:52.382903 1081367 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0717 19:10:52.382958 1081367 start.go:306] JoinCluster complete in 3.281593515s
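For context, the worker join logged above amounts to two shell steps on the new node: the kubeadm join itself, then enabling the kubelet unit. The sketch below replays those steps from Go with os/exec; it is an illustration only, the token and CA-cert hash are placeholders (not the real values from this run), and it prepends the unix:// scheme to the CRI-O socket to avoid the deprecation warning kubeadm printed above.

```go
// Hypothetical sketch: replay the worker-join step shown in the log above.
// <TOKEN> and <CA_HASH> are placeholders; on a real cluster they would come
// from `kubeadm token create --print-join-command` on the control plane.
package main

import (
	"log"
	"os/exec"
)

// run executes a command and aborts with its combined output on failure.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
	}
	log.Printf("%s", out)
}

func main() {
	// Join the node to the cluster, pointing kubeadm at the CRI-O socket.
	run("sudo", "kubeadm", "join", "control-plane.minikube.internal:8443",
		"--token", "<TOKEN>",
		"--discovery-token-ca-cert-hash", "sha256:<CA_HASH>",
		"--ignore-preflight-errors=all",
		"--cri-socket", "unix:///var/run/crio/crio.sock",
		"--node-name", "multinode-464644-m02")

	// Make sure the kubelet starts now and on every boot, as the log does next.
	run("sudo", "systemctl", "daemon-reload")
	run("sudo", "systemctl", "enable", "--now", "kubelet")
}
```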
	I0717 19:10:52.382973 1081367 cni.go:84] Creating CNI manager for ""
	I0717 19:10:52.382978 1081367 cni.go:137] 2 nodes found, recommending kindnet
	I0717 19:10:52.383045 1081367 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 19:10:52.389353 1081367 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0717 19:10:52.389385 1081367 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0717 19:10:52.389393 1081367 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0717 19:10:52.389399 1081367 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 19:10:52.389407 1081367 command_runner.go:130] > Access: 2023-07-17 19:09:18.114596432 +0000
	I0717 19:10:52.389412 1081367 command_runner.go:130] > Modify: 2023-07-15 02:34:28.000000000 +0000
	I0717 19:10:52.389417 1081367 command_runner.go:130] > Change: 2023-07-17 19:09:16.153596432 +0000
	I0717 19:10:52.389421 1081367 command_runner.go:130] >  Birth: -
	I0717 19:10:52.389487 1081367 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0717 19:10:52.389499 1081367 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 19:10:52.414625 1081367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 19:10:52.833391 1081367 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0717 19:10:52.838262 1081367 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0717 19:10:52.841920 1081367 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0717 19:10:52.856103 1081367 command_runner.go:130] > daemonset.apps/kindnet configured
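The CNI manifest apply above simply shells out to the kubectl binary bundled on the node. A minimal sketch of the same call follows; the binary, kubeconfig, and manifest paths are taken from the log but should be treated as illustrative, not guaranteed to exist on other setups.

```go
// Hypothetical sketch: apply the kindnet CNI manifest the way the log does,
// by invoking the kubectl binary that minikube ships on the node.
package main

import (
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.27.3/kubectl", "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl apply failed: %v\n%s", err, out)
	}
	log.Printf("%s", out) // e.g. "daemonset.apps/kindnet configured"
}
```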
	I0717 19:10:52.859624 1081367 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 19:10:52.859870 1081367 kapi.go:59] client config for multinode-464644: &rest.Config{Host:"https://192.168.39.174:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/client.key", CAFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 19:10:52.860217 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0717 19:10:52.860230 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:52.860238 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:52.860245 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:52.863048 1081367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:10:52.863080 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:52.863092 1081367 round_trippers.go:580]     Content-Length: 291
	I0717 19:10:52.863101 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:52 GMT
	I0717 19:10:52.863109 1081367 round_trippers.go:580]     Audit-Id: db01989f-396f-438b-9b22-42dd78e82068
	I0717 19:10:52.863119 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:52.863127 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:52.863140 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:52.863152 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:52.863189 1081367 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"06c3326f-def8-45bf-a91d-f07feefe253d","resourceVersion":"452","creationTimestamp":"2023-07-17T19:09:54Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0717 19:10:52.863347 1081367 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-464644" context rescaled to 1 replicas
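The coredns rescale logged above goes through the apps/v1 Scale subresource (the GET of .../deployments/coredns/scale in the preceding round-trip). A minimal client-go sketch of the same operation is below; the kubeconfig path is a placeholder, and this is an illustration rather than minikube's actual kapi.go code.

```go
// Hypothetical sketch: pin the kube-system/coredns deployment to one replica
// via the Scale subresource, mirroring the request shown in the log.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Read the current scale of the coredns deployment...
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("coredns currently at %d replica(s)\n", scale.Status.Replicas)

	// ...and update it to exactly one replica if it isn't there already.
	if scale.Spec.Replicas != 1 {
		scale.Spec.Replicas = 1
		if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
}
```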
	I0717 19:10:52.863388 1081367 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.49 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0717 19:10:52.867229 1081367 out.go:177] * Verifying Kubernetes components...
	I0717 19:10:52.869294 1081367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:10:52.898154 1081367 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 19:10:52.898374 1081367 kapi.go:59] client config for multinode-464644: &rest.Config{Host:"https://192.168.39.174:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/client.key", CAFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 19:10:52.898627 1081367 node_ready.go:35] waiting up to 6m0s for node "multinode-464644-m02" to be "Ready" ...
	I0717 19:10:52.898697 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644-m02
	I0717 19:10:52.898704 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:52.898711 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:52.898720 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:52.901925 1081367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:10:52.901951 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:52.901958 1081367 round_trippers.go:580]     Content-Length: 3639
	I0717 19:10:52.901964 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:52 GMT
	I0717 19:10:52.901969 1081367 round_trippers.go:580]     Audit-Id: d19e2998-e83a-4310-8fa8-bde6d339e686
	I0717 19:10:52.901975 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:52.901989 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:52.902003 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:52.902015 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:52.902183 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644-m02","uid":"7808c8a5-eed0-4632-bd7c-dc2a2a06fa78","resourceVersion":"507","creationTimestamp":"2023-07-17T19:10:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 2615 chars]
	I0717 19:10:53.403528 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644-m02
	I0717 19:10:53.403554 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:53.403563 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:53.403569 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:53.406926 1081367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:10:53.406965 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:53.406977 1081367 round_trippers.go:580]     Audit-Id: 62416735-d6ac-4dc0-b639-9588c0c32360
	I0717 19:10:53.406985 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:53.406993 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:53.407001 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:53.407010 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:53.407018 1081367 round_trippers.go:580]     Content-Length: 3639
	I0717 19:10:53.407027 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:53 GMT
	I0717 19:10:53.407158 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644-m02","uid":"7808c8a5-eed0-4632-bd7c-dc2a2a06fa78","resourceVersion":"507","creationTimestamp":"2023-07-17T19:10:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 2615 chars]
	I0717 19:10:53.903812 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644-m02
	I0717 19:10:53.903844 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:53.903857 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:53.903864 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:53.906878 1081367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:10:53.906914 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:53.906926 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:53 GMT
	I0717 19:10:53.906934 1081367 round_trippers.go:580]     Audit-Id: 5aa1452c-aff7-4d42-b36b-534c88ad82d1
	I0717 19:10:53.906943 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:53.906951 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:53.906964 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:53.906972 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:53.906982 1081367 round_trippers.go:580]     Content-Length: 3639
	I0717 19:10:53.907085 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644-m02","uid":"7808c8a5-eed0-4632-bd7c-dc2a2a06fa78","resourceVersion":"507","creationTimestamp":"2023-07-17T19:10:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 2615 chars]
	I0717 19:10:54.403481 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644-m02
	I0717 19:10:54.403516 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:54.403536 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:54.403546 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:54.415409 1081367 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0717 19:10:54.415447 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:54.415459 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:54.415469 1081367 round_trippers.go:580]     Content-Length: 3639
	I0717 19:10:54.415479 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:54 GMT
	I0717 19:10:54.415489 1081367 round_trippers.go:580]     Audit-Id: 82b0aa54-730c-4e0d-affb-61e010ee3009
	I0717 19:10:54.415497 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:54.415502 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:54.415509 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:54.415634 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644-m02","uid":"7808c8a5-eed0-4632-bd7c-dc2a2a06fa78","resourceVersion":"507","creationTimestamp":"2023-07-17T19:10:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 2615 chars]
	I0717 19:10:54.903116 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644-m02
	I0717 19:10:54.903153 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:54.903166 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:54.903177 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:54.907039 1081367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:10:54.907063 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:54.907071 1081367 round_trippers.go:580]     Content-Length: 3639
	I0717 19:10:54.907076 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:54 GMT
	I0717 19:10:54.907082 1081367 round_trippers.go:580]     Audit-Id: e34e0f3f-1bd1-48fb-a88d-a763d4ac511f
	I0717 19:10:54.907087 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:54.907092 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:54.907097 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:54.907102 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:54.907153 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644-m02","uid":"7808c8a5-eed0-4632-bd7c-dc2a2a06fa78","resourceVersion":"507","creationTimestamp":"2023-07-17T19:10:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 2615 chars]
	I0717 19:10:54.907436 1081367 node_ready.go:58] node "multinode-464644-m02" has status "Ready":"False"
	I0717 19:10:55.402843 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644-m02
	I0717 19:10:55.402877 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:55.402896 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:55.402906 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:55.407161 1081367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 19:10:55.407192 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:55.407204 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:55.407214 1081367 round_trippers.go:580]     Content-Length: 3639
	I0717 19:10:55.407222 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:55 GMT
	I0717 19:10:55.407232 1081367 round_trippers.go:580]     Audit-Id: 7ee97360-39bf-453b-a578-f2bd20fe1715
	I0717 19:10:55.407241 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:55.407248 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:55.407257 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:55.407372 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644-m02","uid":"7808c8a5-eed0-4632-bd7c-dc2a2a06fa78","resourceVersion":"507","creationTimestamp":"2023-07-17T19:10:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 2615 chars]
	I0717 19:10:55.902894 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644-m02
	I0717 19:10:55.902924 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:55.902933 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:55.902939 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:55.906930 1081367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:10:55.906960 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:55.906968 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:55.906976 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:55.906984 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:55.906993 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:55.907003 1081367 round_trippers.go:580]     Content-Length: 3639
	I0717 19:10:55.907012 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:55 GMT
	I0717 19:10:55.907019 1081367 round_trippers.go:580]     Audit-Id: bb3f52cf-6cf9-4d37-9ca7-9a4d566165a2
	I0717 19:10:55.907122 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644-m02","uid":"7808c8a5-eed0-4632-bd7c-dc2a2a06fa78","resourceVersion":"507","creationTimestamp":"2023-07-17T19:10:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 2615 chars]
	I0717 19:10:56.403680 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644-m02
	I0717 19:10:56.403710 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:56.403721 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:56.403731 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:56.407124 1081367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:10:56.407154 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:56.407163 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:56.407169 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:56.407175 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:56.407180 1081367 round_trippers.go:580]     Content-Length: 3639
	I0717 19:10:56.407186 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:56 GMT
	I0717 19:10:56.407191 1081367 round_trippers.go:580]     Audit-Id: 86b6e94c-a473-4d35-b2e8-05217ca482ac
	I0717 19:10:56.407196 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:56.407311 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644-m02","uid":"7808c8a5-eed0-4632-bd7c-dc2a2a06fa78","resourceVersion":"507","creationTimestamp":"2023-07-17T19:10:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 2615 chars]
	I0717 19:10:56.903805 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644-m02
	I0717 19:10:56.903901 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:56.903913 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:56.903920 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:56.907973 1081367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 19:10:56.908011 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:56.908025 1081367 round_trippers.go:580]     Content-Length: 3639
	I0717 19:10:56.908037 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:56 GMT
	I0717 19:10:56.908047 1081367 round_trippers.go:580]     Audit-Id: 1b67ffc1-a506-47c7-9942-6b013252a7d0
	I0717 19:10:56.908056 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:56.908070 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:56.908079 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:56.908088 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:56.908209 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644-m02","uid":"7808c8a5-eed0-4632-bd7c-dc2a2a06fa78","resourceVersion":"507","creationTimestamp":"2023-07-17T19:10:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 2615 chars]
	I0717 19:10:56.908538 1081367 node_ready.go:58] node "multinode-464644-m02" has status "Ready":"False"
	I0717 19:10:57.403733 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644-m02
	I0717 19:10:57.403760 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:57.403769 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:57.403775 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:57.407232 1081367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:10:57.407275 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:57.407286 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:57 GMT
	I0717 19:10:57.407295 1081367 round_trippers.go:580]     Audit-Id: daae207e-ceb1-4adb-bb57-bfb0f8b2549c
	I0717 19:10:57.407305 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:57.407313 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:57.407321 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:57.407330 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:57.407338 1081367 round_trippers.go:580]     Content-Length: 3639
	I0717 19:10:57.407438 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644-m02","uid":"7808c8a5-eed0-4632-bd7c-dc2a2a06fa78","resourceVersion":"507","creationTimestamp":"2023-07-17T19:10:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 2615 chars]
	I0717 19:10:57.903010 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644-m02
	I0717 19:10:57.903038 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:57.903049 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:57.903057 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:57.906305 1081367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:10:57.906337 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:57.906347 1081367 round_trippers.go:580]     Audit-Id: 241c43e0-6813-43d6-967e-0cabe9778df3
	I0717 19:10:57.906355 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:57.906364 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:57.906372 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:57.906379 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:57.906385 1081367 round_trippers.go:580]     Content-Length: 3639
	I0717 19:10:57.906393 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:57 GMT
	I0717 19:10:57.906449 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644-m02","uid":"7808c8a5-eed0-4632-bd7c-dc2a2a06fa78","resourceVersion":"507","creationTimestamp":"2023-07-17T19:10:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 2615 chars]
	I0717 19:10:58.402754 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644-m02
	I0717 19:10:58.402787 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:58.402798 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:58.402808 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:58.405964 1081367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:10:58.405996 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:58.406008 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:58.406016 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:58.406024 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:58.406032 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:58.406040 1081367 round_trippers.go:580]     Content-Length: 3639
	I0717 19:10:58.406048 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:58 GMT
	I0717 19:10:58.406056 1081367 round_trippers.go:580]     Audit-Id: 2f3436e8-0ec7-4ae7-ad47-8ff536dc1777
	I0717 19:10:58.406147 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644-m02","uid":"7808c8a5-eed0-4632-bd7c-dc2a2a06fa78","resourceVersion":"507","creationTimestamp":"2023-07-17T19:10:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 2615 chars]
	I0717 19:10:58.902745 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644-m02
	I0717 19:10:58.902771 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:58.902780 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:58.902786 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:58.906011 1081367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:10:58.906043 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:58.906053 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:58.906061 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:58.906069 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:58.906077 1081367 round_trippers.go:580]     Content-Length: 3639
	I0717 19:10:58.906086 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:58 GMT
	I0717 19:10:58.906095 1081367 round_trippers.go:580]     Audit-Id: 5edad217-10f9-4a1a-87f2-7e09f8cde0ca
	I0717 19:10:58.906106 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:58.906205 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644-m02","uid":"7808c8a5-eed0-4632-bd7c-dc2a2a06fa78","resourceVersion":"507","creationTimestamp":"2023-07-17T19:10:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 2615 chars]
	I0717 19:10:59.403640 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644-m02
	I0717 19:10:59.403668 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:59.403676 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:59.403682 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:59.407062 1081367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:10:59.407088 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:59.407096 1081367 round_trippers.go:580]     Content-Length: 3725
	I0717 19:10:59.407102 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:59 GMT
	I0717 19:10:59.407107 1081367 round_trippers.go:580]     Audit-Id: caadb23c-6ef9-4c5d-bc62-b70407530ec1
	I0717 19:10:59.407121 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:59.407126 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:59.407131 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:59.407137 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:59.407218 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644-m02","uid":"7808c8a5-eed0-4632-bd7c-dc2a2a06fa78","resourceVersion":"528","creationTimestamp":"2023-07-17T19:10:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 2701 chars]
	I0717 19:10:59.407490 1081367 node_ready.go:49] node "multinode-464644-m02" has status "Ready":"True"
	I0717 19:10:59.407506 1081367 node_ready.go:38] duration metric: took 6.508864672s waiting for node "multinode-464644-m02" to be "Ready" ...
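The readiness wait that just completed is a polling loop against GET /api/v1/nodes/&lt;name&gt;, checking the node's Ready condition on each response until it flips to True. A rough client-go equivalent is sketched below; the kubeconfig path is a placeholder and PollImmediate stands in for whatever retry cadence the test binary actually uses.

```go
// Hypothetical sketch: poll a node until its Ready condition is True,
// roughly what the repeated node GETs in the log above are doing.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll roughly every 500ms for up to 6 minutes, like the 6m0s wait above.
	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		node, getErr := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-464644-m02", metav1.GetOptions{})
		if getErr != nil {
			return false, nil // treat transient errors as "not ready yet"
		}
		return nodeReady(node), nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}
```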
	I0717 19:10:59.407514 1081367 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:10:59.407573 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods
	I0717 19:10:59.407580 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:59.407586 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:59.407594 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:59.412036 1081367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 19:10:59.412081 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:59.412092 1081367 round_trippers.go:580]     Audit-Id: 0155e3a4-8c8a-4a25-b4a1-03992b281668
	I0717 19:10:59.412101 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:59.412109 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:59.412116 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:59.412125 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:59.412137 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:59 GMT
	I0717 19:10:59.412789 1081367 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"528"},"items":[{"metadata":{"name":"coredns-5d78c9869d-wqj4s","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991","resourceVersion":"448","creationTimestamp":"2023-07-17T19:10:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"8369cc45-03bf-4784-a3b1-d46615923fd9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8369cc45-03bf-4784-a3b1-d46615923fd9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67366 chars]
	I0717 19:10:59.415181 1081367 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-wqj4s" in "kube-system" namespace to be "Ready" ...
	I0717 19:10:59.415271 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-wqj4s
	I0717 19:10:59.415279 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:59.415287 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:59.415295 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:59.418096 1081367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:10:59.418121 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:59.418131 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:59.418140 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:59 GMT
	I0717 19:10:59.418148 1081367 round_trippers.go:580]     Audit-Id: 7ad3fb34-3cdc-4888-a8e2-d0bc175ce21b
	I0717 19:10:59.418155 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:59.418163 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:59.418170 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:59.418279 1081367 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-wqj4s","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991","resourceVersion":"448","creationTimestamp":"2023-07-17T19:10:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"8369cc45-03bf-4784-a3b1-d46615923fd9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8369cc45-03bf-4784-a3b1-d46615923fd9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0717 19:10:59.418768 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:10:59.418785 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:59.418795 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:59.418804 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:59.421177 1081367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:10:59.421198 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:59.421206 1081367 round_trippers.go:580]     Audit-Id: 28bf351d-5267-49c1-88f8-01f30a5ba805
	I0717 19:10:59.421211 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:59.421217 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:59.421222 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:59.421227 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:59.421233 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:59 GMT
	I0717 19:10:59.421354 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"423","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0717 19:10:59.421702 1081367 pod_ready.go:92] pod "coredns-5d78c9869d-wqj4s" in "kube-system" namespace has status "Ready":"True"
	I0717 19:10:59.421718 1081367 pod_ready.go:81] duration metric: took 6.511755ms waiting for pod "coredns-5d78c9869d-wqj4s" in "kube-system" namespace to be "Ready" ...
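The per-pod waits that follow apply the same idea to individual system pods: fetch the pod and inspect its PodReady condition. A minimal sketch of that check, again with a placeholder kubeconfig path and the coredns pod name taken from the log purely as an example:

```go
// Hypothetical sketch: decide whether a pod counts as "Ready" the way the
// waits above do, by looking at its PodReady condition.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5d78c9869d-wqj4s", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, podReady(pod))
}
```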
	I0717 19:10:59.421729 1081367 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:10:59.421786 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-464644
	I0717 19:10:59.421793 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:59.421800 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:59.421806 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:59.424186 1081367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:10:59.424206 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:59.424216 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:59.424224 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:59.424239 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:59.424254 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:59.424265 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:59 GMT
	I0717 19:10:59.424278 1081367 round_trippers.go:580]     Audit-Id: 2dde2361-8a2a-4a95-8beb-2ef7e68208b2
	I0717 19:10:59.424467 1081367 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-464644","namespace":"kube-system","uid":"b672d395-d32d-4198-b486-d9cff48d8b9a","resourceVersion":"433","creationTimestamp":"2023-07-17T19:09:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.174:2379","kubernetes.io/config.hash":"d5b599b6912e0d4b30d78bf7b7e52672","kubernetes.io/config.mirror":"d5b599b6912e0d4b30d78bf7b7e52672","kubernetes.io/config.seen":"2023-07-17T19:09:54.339578401Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:09:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0717 19:10:59.425007 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:10:59.425022 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:59.425032 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:59.425042 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:59.427442 1081367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:10:59.427464 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:59.427471 1081367 round_trippers.go:580]     Audit-Id: 0acb740c-3421-4ee4-bafa-c2ddf06b6b8e
	I0717 19:10:59.427477 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:59.427482 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:59.427487 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:59.427496 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:59.427501 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:59 GMT
	I0717 19:10:59.427643 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"423","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0717 19:10:59.427961 1081367 pod_ready.go:92] pod "etcd-multinode-464644" in "kube-system" namespace has status "Ready":"True"
	I0717 19:10:59.427979 1081367 pod_ready.go:81] duration metric: took 6.242141ms waiting for pod "etcd-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:10:59.427994 1081367 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:10:59.428057 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-464644
	I0717 19:10:59.428064 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:59.428071 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:59.428077 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:59.430808 1081367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:10:59.430833 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:59.430840 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:59.430846 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:59.430852 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:59.430865 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:59 GMT
	I0717 19:10:59.430874 1081367 round_trippers.go:580]     Audit-Id: 656b35e9-a1b8-4b04-8c00-5caf86f1598e
	I0717 19:10:59.430882 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:59.430998 1081367 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-464644","namespace":"kube-system","uid":"dd6e14e2-0b92-42b9-b6a2-1562c2c70903","resourceVersion":"432","creationTimestamp":"2023-07-17T19:09:54Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.174:8443","kubernetes.io/config.hash":"b280034e13df00701aec7afc575fcc6c","kubernetes.io/config.mirror":"b280034e13df00701aec7afc575fcc6c","kubernetes.io/config.seen":"2023-07-17T19:09:54.339586957Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:09:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0717 19:10:59.431418 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:10:59.431430 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:59.431437 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:59.431443 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:59.433789 1081367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:10:59.433811 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:59.433818 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:59.433824 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:59.433829 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:59.433835 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:59.433840 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:59 GMT
	I0717 19:10:59.433845 1081367 round_trippers.go:580]     Audit-Id: 344b8e6b-7be8-4ab4-b6ba-20550a022492
	I0717 19:10:59.433961 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"423","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0717 19:10:59.434264 1081367 pod_ready.go:92] pod "kube-apiserver-multinode-464644" in "kube-system" namespace has status "Ready":"True"
	I0717 19:10:59.434277 1081367 pod_ready.go:81] duration metric: took 6.273558ms waiting for pod "kube-apiserver-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:10:59.434286 1081367 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:10:59.434341 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-464644
	I0717 19:10:59.434348 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:59.434355 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:59.434361 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:59.436649 1081367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:10:59.436679 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:59.436691 1081367 round_trippers.go:580]     Audit-Id: f69576aa-1bd4-43fe-a2f5-584089590164
	I0717 19:10:59.436700 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:59.436712 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:59.436725 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:59.436736 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:59.436745 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:59 GMT
	I0717 19:10:59.436929 1081367 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-464644","namespace":"kube-system","uid":"6b598e8b-6c96-4014-b0a2-de37f107a0e9","resourceVersion":"430","creationTimestamp":"2023-07-17T19:09:54Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"323b8f41b30f0969feab8ff61a3ecabd","kubernetes.io/config.mirror":"323b8f41b30f0969feab8ff61a3ecabd","kubernetes.io/config.seen":"2023-07-17T19:09:54.339588566Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:09:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0717 19:10:59.437601 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:10:59.437624 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:59.437637 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:59.437651 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:59.440182 1081367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:10:59.440213 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:59.440224 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:59.440233 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:59.440241 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:59.440250 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:59 GMT
	I0717 19:10:59.440259 1081367 round_trippers.go:580]     Audit-Id: 12e829fb-9226-48b7-9d5f-518c127d2c93
	I0717 19:10:59.440276 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:59.440419 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"423","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0717 19:10:59.440875 1081367 pod_ready.go:92] pod "kube-controller-manager-multinode-464644" in "kube-system" namespace has status "Ready":"True"
	I0717 19:10:59.440896 1081367 pod_ready.go:81] duration metric: took 6.601628ms waiting for pod "kube-controller-manager-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:10:59.440911 1081367 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j6ds6" in "kube-system" namespace to be "Ready" ...
	I0717 19:10:59.604365 1081367 request.go:628] Waited for 163.370725ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j6ds6
	I0717 19:10:59.604448 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j6ds6
	I0717 19:10:59.604454 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:59.604465 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:59.604500 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:59.616215 1081367 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0717 19:10:59.616242 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:59.616250 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:59 GMT
	I0717 19:10:59.616259 1081367 round_trippers.go:580]     Audit-Id: 63ce5f73-7688-4d81-b4d0-e5bc01917647
	I0717 19:10:59.616269 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:59.616278 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:59.616288 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:59.616296 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:59.616434 1081367 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-j6ds6","generateName":"kube-proxy-","namespace":"kube-system","uid":"439bb5b7-0e46-4762-a9a7-e648a212ad93","resourceVersion":"518","creationTimestamp":"2023-07-17T19:10:52Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cab15633-0bd8-4e4c-a88b-c03af1462254","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cab15633-0bd8-4e4c-a88b-c03af1462254\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0717 19:10:59.804473 1081367 request.go:628] Waited for 187.435991ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes/multinode-464644-m02
	I0717 19:10:59.804564 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644-m02
	I0717 19:10:59.804569 1081367 round_trippers.go:469] Request Headers:
	I0717 19:10:59.804578 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:10:59.804584 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:10:59.808045 1081367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:10:59.808073 1081367 round_trippers.go:577] Response Headers:
	I0717 19:10:59.808082 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:10:59.808092 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:10:59.808101 1081367 round_trippers.go:580]     Content-Length: 3725
	I0717 19:10:59.808111 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:10:59 GMT
	I0717 19:10:59.808120 1081367 round_trippers.go:580]     Audit-Id: 27886b65-1003-49ea-830e-6de0131502ed
	I0717 19:10:59.808130 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:10:59.808136 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:10:59.808248 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644-m02","uid":"7808c8a5-eed0-4632-bd7c-dc2a2a06fa78","resourceVersion":"528","creationTimestamp":"2023-07-17T19:10:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 2701 chars]
	I0717 19:10:59.808613 1081367 pod_ready.go:92] pod "kube-proxy-j6ds6" in "kube-system" namespace has status "Ready":"True"
	I0717 19:10:59.808633 1081367 pod_ready.go:81] duration metric: took 367.713718ms waiting for pod "kube-proxy-j6ds6" in "kube-system" namespace to be "Ready" ...
	I0717 19:10:59.808646 1081367 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qwsn5" in "kube-system" namespace to be "Ready" ...
	I0717 19:11:00.004171 1081367 request.go:628] Waited for 195.436588ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qwsn5
	I0717 19:11:00.004251 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qwsn5
	I0717 19:11:00.004256 1081367 round_trippers.go:469] Request Headers:
	I0717 19:11:00.004264 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:11:00.004271 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:11:00.007362 1081367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:11:00.007393 1081367 round_trippers.go:577] Response Headers:
	I0717 19:11:00.007403 1081367 round_trippers.go:580]     Audit-Id: 9901d71d-fc71-4799-957e-f1aa8e73708b
	I0717 19:11:00.007413 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:11:00.007423 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:11:00.007433 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:11:00.007444 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:11:00.007452 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:11:00 GMT
	I0717 19:11:00.007656 1081367 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qwsn5","generateName":"kube-proxy-","namespace":"kube-system","uid":"50e3f5e0-00d9-4412-b4de-649bc29733e9","resourceVersion":"412","creationTimestamp":"2023-07-17T19:10:08Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cab15633-0bd8-4e4c-a88b-c03af1462254","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cab15633-0bd8-4e4c-a88b-c03af1462254\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0717 19:11:00.204618 1081367 request.go:628] Waited for 196.440955ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:11:00.204707 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:11:00.204714 1081367 round_trippers.go:469] Request Headers:
	I0717 19:11:00.204727 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:11:00.204734 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:11:00.208015 1081367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:11:00.208044 1081367 round_trippers.go:577] Response Headers:
	I0717 19:11:00.208052 1081367 round_trippers.go:580]     Audit-Id: a2ad15db-17ca-47fa-bee4-99aed3a776f1
	I0717 19:11:00.208060 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:11:00.208070 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:11:00.208079 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:11:00.208089 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:11:00.208098 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:11:00 GMT
	I0717 19:11:00.208255 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"423","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0717 19:11:00.208714 1081367 pod_ready.go:92] pod "kube-proxy-qwsn5" in "kube-system" namespace has status "Ready":"True"
	I0717 19:11:00.208735 1081367 pod_ready.go:81] duration metric: took 400.079769ms waiting for pod "kube-proxy-qwsn5" in "kube-system" namespace to be "Ready" ...
	I0717 19:11:00.208750 1081367 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:11:00.404245 1081367 request.go:628] Waited for 195.412206ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-464644
	I0717 19:11:00.404376 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-464644
	I0717 19:11:00.404389 1081367 round_trippers.go:469] Request Headers:
	I0717 19:11:00.404397 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:11:00.404404 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:11:00.407445 1081367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:11:00.407474 1081367 round_trippers.go:577] Response Headers:
	I0717 19:11:00.407482 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:11:00.407491 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:11:00.407500 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:11:00 GMT
	I0717 19:11:00.407509 1081367 round_trippers.go:580]     Audit-Id: af0e8d69-326d-4f4b-9126-f4f57653fad2
	I0717 19:11:00.407518 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:11:00.407527 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:11:00.407651 1081367 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-464644","namespace":"kube-system","uid":"04e5660d-abb0-432a-861e-c5c242edfb98","resourceVersion":"431","creationTimestamp":"2023-07-17T19:09:54Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6435a6b37c43f83175753c4199c85407","kubernetes.io/config.mirror":"6435a6b37c43f83175753c4199c85407","kubernetes.io/config.seen":"2023-07-17T19:09:54.339590320Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:09:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0717 19:11:00.604496 1081367 request.go:628] Waited for 196.42908ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:11:00.604604 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:11:00.604611 1081367 round_trippers.go:469] Request Headers:
	I0717 19:11:00.604624 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:11:00.604633 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:11:00.609694 1081367 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 19:11:00.609798 1081367 round_trippers.go:577] Response Headers:
	I0717 19:11:00.609824 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:11:00.609835 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:11:00.609850 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:11:00.609863 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:11:00.609874 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:11:00 GMT
	I0717 19:11:00.609884 1081367 round_trippers.go:580]     Audit-Id: 3b7db77e-e108-4486-a3f0-5cc5da83ca9a
	I0717 19:11:00.610029 1081367 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"423","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0717 19:11:00.610386 1081367 pod_ready.go:92] pod "kube-scheduler-multinode-464644" in "kube-system" namespace has status "Ready":"True"
	I0717 19:11:00.610405 1081367 pod_ready.go:81] duration metric: took 401.644402ms waiting for pod "kube-scheduler-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:11:00.610420 1081367 pod_ready.go:38] duration metric: took 1.20289669s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:11:00.610444 1081367 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 19:11:00.610511 1081367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:11:00.625409 1081367 system_svc.go:56] duration metric: took 14.956102ms WaitForService to wait for kubelet.
	I0717 19:11:00.625444 1081367 kubeadm.go:581] duration metric: took 7.762023691s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 19:11:00.625476 1081367 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:11:00.803920 1081367 request.go:628] Waited for 178.330585ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes
	I0717 19:11:00.803982 1081367 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes
	I0717 19:11:00.803993 1081367 round_trippers.go:469] Request Headers:
	I0717 19:11:00.804002 1081367 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:11:00.804009 1081367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:11:00.807574 1081367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:11:00.807607 1081367 round_trippers.go:577] Response Headers:
	I0717 19:11:00.807625 1081367 round_trippers.go:580]     Audit-Id: 80709674-97af-4a41-b7de-7d7a86f3ab16
	I0717 19:11:00.807633 1081367 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:11:00.807641 1081367 round_trippers.go:580]     Content-Type: application/json
	I0717 19:11:00.807648 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:11:00.807655 1081367 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:11:00.807662 1081367 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:11:00 GMT
	I0717 19:11:00.807951 1081367 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"529"},"items":[{"metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"423","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 9645 chars]
	I0717 19:11:00.808451 1081367 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 19:11:00.808472 1081367 node_conditions.go:123] node cpu capacity is 2
	I0717 19:11:00.808483 1081367 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 19:11:00.808487 1081367 node_conditions.go:123] node cpu capacity is 2
	I0717 19:11:00.808491 1081367 node_conditions.go:105] duration metric: took 183.010555ms to run NodePressure ...
	I0717 19:11:00.808503 1081367 start.go:228] waiting for startup goroutines ...
	I0717 19:11:00.808541 1081367 start.go:242] writing updated cluster config ...
	I0717 19:11:00.808839 1081367 ssh_runner.go:195] Run: rm -f paused
	I0717 19:11:00.862077 1081367 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 19:11:00.866160 1081367 out.go:177] * Done! kubectl is now configured to use "multinode-464644" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-07-17 19:09:17 UTC, ends at Mon 2023-07-17 19:11:08 UTC. --
	Jul 17 19:11:07 multinode-464644 crio[715]: time="2023-07-17 19:11:07.757363161Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:bfebfb266bbe968061752fe1f6249f88a2dfad6fcab83b565942d284ebf4de95,Metadata:&PodSandboxMetadata{Name:busybox-67b7f59bb-jgj4t,Uid:fe524d58-c36b-41da-82eb-f0336652f7c2,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689621062076929333,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-67b7f59bb-jgj4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe524d58-c36b-41da-82eb-f0336652f7c2,pod-template-hash: 67b7f59bb,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T19:11:01.738475512Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2806036aae7210824873e26db35deb94e7b51f03204a3a9fd3ef4fec76c804e5,Metadata:&PodSandboxMetadata{Name:coredns-5d78c9869d-wqj4s,Uid:a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991,Namespace:kube-system,Attempt:0,},
State:SANDBOX_READY,CreatedAt:1689621014417930458,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5d78c9869d-wqj4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991,k8s-app: kube-dns,pod-template-hash: 5d78c9869d,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T19:10:14.057015368Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e40a866cdebe17bfd6c706bff422ef0bac20c3b71a1c4b4ddce124e384cc6f81,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:bd46cf29-49d3-4c0a-908e-a323a525d8d5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689621014409954652,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd46cf29-49d3-4c0a-908e-a323a525d8d5,},Annotations:map[string]strin
g{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-07-17T19:10:14.047391783Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:544c8365f6c38fb59de23e7d18d35ca4bce903f75b14dbfd5be760173cd025ab,Metadata:&PodSandboxMetadata{Name:kube-proxy-qwsn5,Uid:50e3f5e0-00d9-4412-b4de-649bc29733e9,Namespace:kube-system,Attem
pt:0,},State:SANDBOX_READY,CreatedAt:1689621009214997792,Labels:map[string]string{controller-revision-hash: 56999f657b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-qwsn5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50e3f5e0-00d9-4412-b4de-649bc29733e9,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T19:10:08.525977414Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2cfed4ce7b15452ec8e4ea5f375652bd0ddeb2ca6ef519d175f0533e4873486d,Metadata:&PodSandboxMetadata{Name:kindnet-2tp5c,Uid:4e4881b0-4a20-4588-a87b-d2ba9c9b6939,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689621009205938214,Labels:map[string]string{app: kindnet,controller-revision-hash: 575d9d6996,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-2tp5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e4881b0-4a20-4588-a87b-d2ba9c9b6939,k8s-app: kindnet,pod-template-generati
on: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T19:10:08.530556587Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:45fdcad67a4a17de70b3159e1da0efc05ae7b7f52eec4db6157a504e77e9410f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-464644,Uid:6435a6b37c43f83175753c4199c85407,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689620985579126306,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6435a6b37c43f83175753c4199c85407,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6435a6b37c43f83175753c4199c85407,kubernetes.io/config.seen: 2023-07-17T19:09:44.953723797Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:82bb9bff98d9fd8a319632b2f0937ca6b531dc80e32ce311517b9021956e0499,Metadata:&PodSandboxMetadata{Name:kube-controller-manag
er-multinode-464644,Uid:323b8f41b30f0969feab8ff61a3ecabd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689620985568079248,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 323b8f41b30f0969feab8ff61a3ecabd,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 323b8f41b30f0969feab8ff61a3ecabd,kubernetes.io/config.seen: 2023-07-17T19:09:44.953722519Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fdd7d323a47560c17b8194ab4029ff652d275b1d48f67f9d87d77e3b485a315c,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-464644,Uid:b280034e13df00701aec7afc575fcc6c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689620985561214717,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-46
4644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b280034e13df00701aec7afc575fcc6c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.174:8443,kubernetes.io/config.hash: b280034e13df00701aec7afc575fcc6c,kubernetes.io/config.seen: 2023-07-17T19:09:44.953721195Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ae3eff3fd49fd3000f348d5a13baf55f90eba6c4a35434a2c906b1669a2ac17e,Metadata:&PodSandboxMetadata{Name:etcd-multinode-464644,Uid:d5b599b6912e0d4b30d78bf7b7e52672,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689620985507272286,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b599b6912e0d4b30d78bf7b7e52672,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.174:2379,kubernet
es.io/config.hash: d5b599b6912e0d4b30d78bf7b7e52672,kubernetes.io/config.seen: 2023-07-17T19:09:44.953715894Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=e5b21b62-83bd-4e60-96a4-bdc313257061 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 19:11:07 multinode-464644 crio[715]: time="2023-07-17 19:11:07.758855442Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4f664fc7-6d1f-4d82-9d2a-16365f10589d name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:11:07 multinode-464644 crio[715]: time="2023-07-17 19:11:07.758919085Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4f664fc7-6d1f-4d82-9d2a-16365f10589d name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:11:07 multinode-464644 crio[715]: time="2023-07-17 19:11:07.759185168Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3e5a1b9381a3c9dbcba5140c6540e6ff279841f1225a29f653929db21f470461,PodSandboxId:bfebfb266bbe968061752fe1f6249f88a2dfad6fcab83b565942d284ebf4de95,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1689621063622542168,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-jgj4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe524d58-c36b-41da-82eb-f0336652f7c2,},Annotations:map[string]string{io.kubernetes.container.hash: 68a60442,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec2f88e21062eb40c4833fd64c4418890d4e3b43799599b3703529f12097c960,PodSandboxId:2806036aae7210824873e26db35deb94e7b51f03204a3a9fd3ef4fec76c804e5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689621015251059246,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-wqj4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991,},Annotations:map[string]string{io.kubernetes.container.hash: 6953278b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ab6fce4d081be62b023ac344e0f8ac748d4e8e6e364ad022743900004fb4cea,PodSandboxId:e40a866cdebe17bfd6c706bff422ef0bac20c3b71a1c4b4ddce124e384cc6f81,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689621015010579859,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: bd46cf29-49d3-4c0a-908e-a323a525d8d5,},Annotations:map[string]string{io.kubernetes.container.hash: c2da66c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d534ff22d37f42cf8e64d752ebeb215259953405d690e91febcc796b44d494b,PodSandboxId:2cfed4ce7b15452ec8e4ea5f375652bd0ddeb2ca6ef519d175f0533e4873486d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1689621012238322187,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2tp5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 4e4881b0-4a20-4588-a87b-d2ba9c9b6939,},Annotations:map[string]string{io.kubernetes.container.hash: 711d75fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4d05f96defb555e6e60caf86ef6245356de0b5a8591daa82400f031201072af,PodSandboxId:544c8365f6c38fb59de23e7d18d35ca4bce903f75b14dbfd5be760173cd025ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689621009791970901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qwsn5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50e3f5e0-00d9-4412-b4de-649bc29
733e9,},Annotations:map[string]string{io.kubernetes.container.hash: ff6af5cb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f6f6de8ab31f7b4843778e320dcdbf63bdeef7be1f4999e0c06d47d852de51f,PodSandboxId:45fdcad67a4a17de70b3159e1da0efc05ae7b7f52eec4db6157a504e77e9410f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689620986950329117,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6435a6b37c43f83175753c4199c85407,},Anno
tations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1358baba0d227ad1e2a3fc076c2a86954835a12f339b09355c75b75a8e1609c5,PodSandboxId:82bb9bff98d9fd8a319632b2f0937ca6b531dc80e32ce311517b9021956e0499,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689620986388340821,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 323b8f41b30f096
9feab8ff61a3ecabd,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:819d9daca5c8a562cbe3bd6e305042e1bf981788a206bccb52e41cf01798f741,PodSandboxId:ae3eff3fd49fd3000f348d5a13baf55f90eba6c4a35434a2c906b1669a2ac17e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689620986424534536,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b599b6912e0d4b30d78bf7b7e52672,},Annotations:map[string]string{io
.kubernetes.container.hash: cf0d9749,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ff68c0a594cf76b1e9ad2ecf972dfab0dd4b2c215658b9176f7fc1b416b4ece,PodSandboxId:fdd7d323a47560c17b8194ab4029ff652d275b1d48f67f9d87d77e3b485a315c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689620986273174595,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b280034e13df00701aec7afc575fcc6c,},Annotations:map[string]string{io.kubernetes.
container.hash: 306844e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4f664fc7-6d1f-4d82-9d2a-16365f10589d name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:11:08 multinode-464644 crio[715]: time="2023-07-17 19:11:08.049813779Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=73248a5f-7bb2-4bd1-8967-253f4ab2e8d4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:11:08 multinode-464644 crio[715]: time="2023-07-17 19:11:08.049936503Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=73248a5f-7bb2-4bd1-8967-253f4ab2e8d4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:11:08 multinode-464644 crio[715]: time="2023-07-17 19:11:08.050305937Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3e5a1b9381a3c9dbcba5140c6540e6ff279841f1225a29f653929db21f470461,PodSandboxId:bfebfb266bbe968061752fe1f6249f88a2dfad6fcab83b565942d284ebf4de95,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1689621063622542168,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-jgj4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe524d58-c36b-41da-82eb-f0336652f7c2,},Annotations:map[string]string{io.kubernetes.container.hash: 68a60442,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec2f88e21062eb40c4833fd64c4418890d4e3b43799599b3703529f12097c960,PodSandboxId:2806036aae7210824873e26db35deb94e7b51f03204a3a9fd3ef4fec76c804e5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689621015251059246,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-wqj4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991,},Annotations:map[string]string{io.kubernetes.container.hash: 6953278b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ab6fce4d081be62b023ac344e0f8ac748d4e8e6e364ad022743900004fb4cea,PodSandboxId:e40a866cdebe17bfd6c706bff422ef0bac20c3b71a1c4b4ddce124e384cc6f81,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689621015010579859,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: bd46cf29-49d3-4c0a-908e-a323a525d8d5,},Annotations:map[string]string{io.kubernetes.container.hash: c2da66c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d534ff22d37f42cf8e64d752ebeb215259953405d690e91febcc796b44d494b,PodSandboxId:2cfed4ce7b15452ec8e4ea5f375652bd0ddeb2ca6ef519d175f0533e4873486d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1689621012238322187,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2tp5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 4e4881b0-4a20-4588-a87b-d2ba9c9b6939,},Annotations:map[string]string{io.kubernetes.container.hash: 711d75fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4d05f96defb555e6e60caf86ef6245356de0b5a8591daa82400f031201072af,PodSandboxId:544c8365f6c38fb59de23e7d18d35ca4bce903f75b14dbfd5be760173cd025ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689621009791970901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qwsn5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50e3f5e0-00d9-4412-b4de-649bc29
733e9,},Annotations:map[string]string{io.kubernetes.container.hash: ff6af5cb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f6f6de8ab31f7b4843778e320dcdbf63bdeef7be1f4999e0c06d47d852de51f,PodSandboxId:45fdcad67a4a17de70b3159e1da0efc05ae7b7f52eec4db6157a504e77e9410f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689620986950329117,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6435a6b37c43f83175753c4199c85407,},Anno
tations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1358baba0d227ad1e2a3fc076c2a86954835a12f339b09355c75b75a8e1609c5,PodSandboxId:82bb9bff98d9fd8a319632b2f0937ca6b531dc80e32ce311517b9021956e0499,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689620986388340821,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 323b8f41b30f096
9feab8ff61a3ecabd,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:819d9daca5c8a562cbe3bd6e305042e1bf981788a206bccb52e41cf01798f741,PodSandboxId:ae3eff3fd49fd3000f348d5a13baf55f90eba6c4a35434a2c906b1669a2ac17e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689620986424534536,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b599b6912e0d4b30d78bf7b7e52672,},Annotations:map[string]string{io
.kubernetes.container.hash: cf0d9749,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ff68c0a594cf76b1e9ad2ecf972dfab0dd4b2c215658b9176f7fc1b416b4ece,PodSandboxId:fdd7d323a47560c17b8194ab4029ff652d275b1d48f67f9d87d77e3b485a315c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689620986273174595,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b280034e13df00701aec7afc575fcc6c,},Annotations:map[string]string{io.kubernetes.
container.hash: 306844e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=73248a5f-7bb2-4bd1-8967-253f4ab2e8d4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:11:08 multinode-464644 crio[715]: time="2023-07-17 19:11:08.102920226Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a05f463c-b937-475f-858b-7480814a482e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:11:08 multinode-464644 crio[715]: time="2023-07-17 19:11:08.103012749Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a05f463c-b937-475f-858b-7480814a482e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:11:08 multinode-464644 crio[715]: time="2023-07-17 19:11:08.103280566Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3e5a1b9381a3c9dbcba5140c6540e6ff279841f1225a29f653929db21f470461,PodSandboxId:bfebfb266bbe968061752fe1f6249f88a2dfad6fcab83b565942d284ebf4de95,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1689621063622542168,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-jgj4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe524d58-c36b-41da-82eb-f0336652f7c2,},Annotations:map[string]string{io.kubernetes.container.hash: 68a60442,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec2f88e21062eb40c4833fd64c4418890d4e3b43799599b3703529f12097c960,PodSandboxId:2806036aae7210824873e26db35deb94e7b51f03204a3a9fd3ef4fec76c804e5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689621015251059246,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-wqj4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991,},Annotations:map[string]string{io.kubernetes.container.hash: 6953278b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ab6fce4d081be62b023ac344e0f8ac748d4e8e6e364ad022743900004fb4cea,PodSandboxId:e40a866cdebe17bfd6c706bff422ef0bac20c3b71a1c4b4ddce124e384cc6f81,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689621015010579859,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: bd46cf29-49d3-4c0a-908e-a323a525d8d5,},Annotations:map[string]string{io.kubernetes.container.hash: c2da66c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d534ff22d37f42cf8e64d752ebeb215259953405d690e91febcc796b44d494b,PodSandboxId:2cfed4ce7b15452ec8e4ea5f375652bd0ddeb2ca6ef519d175f0533e4873486d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1689621012238322187,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2tp5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 4e4881b0-4a20-4588-a87b-d2ba9c9b6939,},Annotations:map[string]string{io.kubernetes.container.hash: 711d75fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4d05f96defb555e6e60caf86ef6245356de0b5a8591daa82400f031201072af,PodSandboxId:544c8365f6c38fb59de23e7d18d35ca4bce903f75b14dbfd5be760173cd025ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689621009791970901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qwsn5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50e3f5e0-00d9-4412-b4de-649bc29
733e9,},Annotations:map[string]string{io.kubernetes.container.hash: ff6af5cb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f6f6de8ab31f7b4843778e320dcdbf63bdeef7be1f4999e0c06d47d852de51f,PodSandboxId:45fdcad67a4a17de70b3159e1da0efc05ae7b7f52eec4db6157a504e77e9410f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689620986950329117,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6435a6b37c43f83175753c4199c85407,},Anno
tations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1358baba0d227ad1e2a3fc076c2a86954835a12f339b09355c75b75a8e1609c5,PodSandboxId:82bb9bff98d9fd8a319632b2f0937ca6b531dc80e32ce311517b9021956e0499,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689620986388340821,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 323b8f41b30f096
9feab8ff61a3ecabd,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:819d9daca5c8a562cbe3bd6e305042e1bf981788a206bccb52e41cf01798f741,PodSandboxId:ae3eff3fd49fd3000f348d5a13baf55f90eba6c4a35434a2c906b1669a2ac17e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689620986424534536,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b599b6912e0d4b30d78bf7b7e52672,},Annotations:map[string]string{io
.kubernetes.container.hash: cf0d9749,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ff68c0a594cf76b1e9ad2ecf972dfab0dd4b2c215658b9176f7fc1b416b4ece,PodSandboxId:fdd7d323a47560c17b8194ab4029ff652d275b1d48f67f9d87d77e3b485a315c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689620986273174595,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b280034e13df00701aec7afc575fcc6c,},Annotations:map[string]string{io.kubernetes.
container.hash: 306844e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a05f463c-b937-475f-858b-7480814a482e name=/runtime.v1alpha2.RuntimeService/ListContainers
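	The Request/Response pairs above trace the CRI runtime.v1alpha2 RuntimeService/ListContainers RPC that clients such as the kubelet or crictl issue against the CRI-O socket; with an empty filter, CRI-O logs "No filters were applied" and returns the full container list shown in each response. Below is a minimal, illustrative Go sketch of issuing the same call directly. The socket path /var/run/crio/crio.sock and the k8s.io/cri-api v1alpha2 client package are assumptions chosen to match the API version named in the log, not values taken from this run.

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2" // assumed; matches runtime.v1alpha2 in the log
	)

	func main() {
		// Assumed default CRI-O socket path; adjust for the host under test.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// An empty filter mirrors the logged requests, so the runtime returns every container.
		client := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{},
		})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %s  %s\n", c.GetId(), c.GetMetadata().GetName(), c.GetState())
		}
	}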
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	3e5a1b9381a3c       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   bfebfb266bbe9
	ec2f88e21062e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      53 seconds ago       Running             coredns                   0                   2806036aae721
	5ab6fce4d081b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      53 seconds ago       Running             storage-provisioner       0                   e40a866cdebe1
	2d534ff22d37f       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da                                      56 seconds ago       Running             kindnet-cni               0                   2cfed4ce7b154
	a4d05f96defb5       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c                                      58 seconds ago       Running             kube-proxy                0                   544c8365f6c38
	8f6f6de8ab31f       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a                                      About a minute ago   Running             kube-scheduler            0                   45fdcad67a4a1
	819d9daca5c8a       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681                                      About a minute ago   Running             etcd                      0                   ae3eff3fd49fd
	1358baba0d227       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f                                      About a minute ago   Running             kube-controller-manager   0                   82bb9bff98d9f
	5ff68c0a594cf       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a                                      About a minute ago   Running             kube-apiserver            0                   fdd7d323a4756
	
	* 
	* ==> coredns [ec2f88e21062eb40c4833fd64c4418890d4e3b43799599b3703529f12097c960] <==
	* [INFO] 10.244.1.2:39536 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000163124s
	[INFO] 10.244.0.3:53596 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000179277s
	[INFO] 10.244.0.3:35020 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003709563s
	[INFO] 10.244.0.3:50168 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000097299s
	[INFO] 10.244.0.3:33632 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091111s
	[INFO] 10.244.0.3:43368 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002635488s
	[INFO] 10.244.0.3:55326 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00007831s
	[INFO] 10.244.0.3:51304 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000100466s
	[INFO] 10.244.0.3:49521 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000038929s
	[INFO] 10.244.1.2:38175 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165058s
	[INFO] 10.244.1.2:43422 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000264005s
	[INFO] 10.244.1.2:44214 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000175833s
	[INFO] 10.244.1.2:45292 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110822s
	[INFO] 10.244.0.3:50245 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110413s
	[INFO] 10.244.0.3:58847 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000215807s
	[INFO] 10.244.0.3:35024 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00008991s
	[INFO] 10.244.0.3:55978 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093336s
	[INFO] 10.244.1.2:40866 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000178309s
	[INFO] 10.244.1.2:39312 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000164081s
	[INFO] 10.244.1.2:41647 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000169688s
	[INFO] 10.244.1.2:46077 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000151418s
	[INFO] 10.244.0.3:45459 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131521s
	[INFO] 10.244.0.3:41155 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000238663s
	[INFO] 10.244.0.3:60826 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000072074s
	[INFO] 10.244.0.3:41045 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000077362s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-464644
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-464644
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5
	                    minikube.k8s.io/name=multinode-464644
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T19_09_55_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 19:09:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-464644
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 19:11:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 19:10:13 +0000   Mon, 17 Jul 2023 19:09:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 19:10:13 +0000   Mon, 17 Jul 2023 19:09:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 19:10:13 +0000   Mon, 17 Jul 2023 19:09:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 19:10:13 +0000   Mon, 17 Jul 2023 19:10:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.174
	  Hostname:    multinode-464644
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 87d391e2c653469e8471a8f89fe7ad1d
	  System UUID:                87d391e2-c653-469e-8471-a8f89fe7ad1d
	  Boot ID:                    ee476281-6469-467e-b16c-e4c7f6e7b53e
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-jgj4t                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 coredns-5d78c9869d-wqj4s                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     60s
	  kube-system                 etcd-multinode-464644                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         74s
	  kube-system                 kindnet-2tp5c                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      60s
	  kube-system                 kube-apiserver-multinode-464644             250m (12%)    0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-controller-manager-multinode-464644    200m (10%)    0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-proxy-qwsn5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-scheduler-multinode-464644             100m (5%)     0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 58s                kube-proxy       
	  Normal  Starting                 84s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  83s (x8 over 83s)  kubelet          Node multinode-464644 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    83s (x8 over 83s)  kubelet          Node multinode-464644 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     83s (x7 over 83s)  kubelet          Node multinode-464644 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  83s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 74s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  74s                kubelet          Node multinode-464644 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    74s                kubelet          Node multinode-464644 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     74s                kubelet          Node multinode-464644 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           61s                node-controller  Node multinode-464644 event: Registered Node multinode-464644 in Controller
	  Normal  NodeReady                55s                kubelet          Node multinode-464644 status is now: NodeReady
	
	
	Name:               multinode-464644-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-464644-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 19:10:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-464644-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 19:11:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 19:10:59 +0000   Mon, 17 Jul 2023 19:10:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 19:10:59 +0000   Mon, 17 Jul 2023 19:10:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 19:10:59 +0000   Mon, 17 Jul 2023 19:10:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 19:10:59 +0000   Mon, 17 Jul 2023 19:10:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.49
	  Hostname:    multinode-464644-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 abd9ef3b2307468e830a565575aeab4d
	  System UUID:                abd9ef3b-2307-468e-830a-565575aeab4d
	  Boot ID:                    c2528652-f898-4c51-b8a2-3ef727bc0aaa
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-bjpl2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kindnet-t77xh              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16s
	  kube-system                 kube-proxy-j6ds6           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12s                kube-proxy       
	  Normal  NodeHasSufficientMemory  17s (x5 over 18s)  kubelet          Node multinode-464644-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17s (x5 over 18s)  kubelet          Node multinode-464644-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17s (x5 over 18s)  kubelet          Node multinode-464644-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16s                node-controller  Node multinode-464644-m02 event: Registered Node multinode-464644-m02 in Controller
	  Normal  NodeReady                9s                 kubelet          Node multinode-464644-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Jul17 19:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.075745] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.399299] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.605163] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.161715] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.022358] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +12.376894] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.105619] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.167081] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.116317] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.236192] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[  +9.462118] systemd-fstab-generator[924]: Ignoring "noauto" for root device
	[  +9.837167] systemd-fstab-generator[1264]: Ignoring "noauto" for root device
	[Jul17 19:10] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [819d9daca5c8a562cbe3bd6e305042e1bf981788a206bccb52e41cf01798f741] <==
	* {"level":"info","ts":"2023-07-17T19:09:48.604Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T19:09:48.604Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-17T19:09:48.604Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-17T19:10:08.717Z","caller":"traceutil/trace.go:171","msg":"trace[1736400257] transaction","detail":"{read_only:false; response_revision:373; number_of_response:1; }","duration":"100.439849ms","start":"2023-07-17T19:10:08.616Z","end":"2023-07-17T19:10:08.717Z","steps":["trace[1736400257] 'process raft request'  (duration: 82.246365ms)","trace[1736400257] 'compare'  (duration: 11.831068ms)"],"step_count":2}
	{"level":"info","ts":"2023-07-17T19:10:08.727Z","caller":"traceutil/trace.go:171","msg":"trace[627502854] transaction","detail":"{read_only:false; response_revision:374; number_of_response:1; }","duration":"102.023867ms","start":"2023-07-17T19:10:08.625Z","end":"2023-07-17T19:10:08.727Z","steps":["trace[627502854] 'process raft request'  (duration: 101.941757ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T19:10:52.037Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"291.002315ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8360802301232884155 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:74078965424129ba>","response":"size:40"}
	{"level":"warn","ts":"2023-07-17T19:10:52.037Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T19:10:51.633Z","time spent":"404.128232ms","remote":"127.0.0.1:42028","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2023-07-17T19:10:52.037Z","caller":"traceutil/trace.go:171","msg":"trace[269449378] linearizableReadLoop","detail":"{readStateIndex:506; appliedIndex:505; }","duration":"372.735722ms","start":"2023-07-17T19:10:51.664Z","end":"2023-07-17T19:10:52.037Z","steps":["trace[269449378] 'read index received'  (duration: 18.380151ms)","trace[269449378] 'applied index is now lower than readState.Index'  (duration: 354.354118ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-17T19:10:52.037Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"372.989833ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-464644-m02\" ","response":"range_response_count:1 size:1987"}
	{"level":"info","ts":"2023-07-17T19:10:52.037Z","caller":"traceutil/trace.go:171","msg":"trace[735003721] range","detail":"{range_begin:/registry/minions/multinode-464644-m02; range_end:; response_count:1; response_revision:485; }","duration":"373.062468ms","start":"2023-07-17T19:10:51.664Z","end":"2023-07-17T19:10:52.037Z","steps":["trace[735003721] 'agreement among raft nodes before linearized reading'  (duration: 372.929184ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T19:10:52.037Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T19:10:51.664Z","time spent":"373.101374ms","remote":"127.0.0.1:42048","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":2009,"request content":"key:\"/registry/minions/multinode-464644-m02\" "}
	{"level":"info","ts":"2023-07-17T19:10:52.038Z","caller":"traceutil/trace.go:171","msg":"trace[1645645407] transaction","detail":"{read_only:false; number_of_response:1; response_revision:486; }","duration":"341.777416ms","start":"2023-07-17T19:10:51.696Z","end":"2023-07-17T19:10:52.038Z","steps":["trace[1645645407] 'process raft request'  (duration: 341.740449ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T19:10:52.038Z","caller":"traceutil/trace.go:171","msg":"trace[1895647944] transaction","detail":"{read_only:false; response_revision:486; number_of_response:1; }","duration":"347.638954ms","start":"2023-07-17T19:10:51.690Z","end":"2023-07-17T19:10:52.038Z","steps":["trace[1895647944] 'process raft request'  (duration: 346.847163ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T19:10:52.039Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"366.374855ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/multinode-464644-m02\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-17T19:10:52.039Z","caller":"traceutil/trace.go:171","msg":"trace[1863078891] range","detail":"{range_begin:/registry/csinodes/multinode-464644-m02; range_end:; response_count:0; response_revision:487; }","duration":"366.435451ms","start":"2023-07-17T19:10:51.672Z","end":"2023-07-17T19:10:52.039Z","steps":["trace[1863078891] 'agreement among raft nodes before linearized reading'  (duration: 366.304518ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T19:10:52.039Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T19:10:51.672Z","time spent":"366.548035ms","remote":"127.0.0.1:42104","response type":"/etcdserverpb.KV/Range","request count":0,"request size":41,"response count":0,"response size":27,"request content":"key:\"/registry/csinodes/multinode-464644-m02\" "}
	{"level":"info","ts":"2023-07-17T19:10:52.039Z","caller":"traceutil/trace.go:171","msg":"trace[957033391] transaction","detail":"{read_only:false; response_revision:487; number_of_response:1; }","duration":"165.77838ms","start":"2023-07-17T19:10:51.873Z","end":"2023-07-17T19:10:52.039Z","steps":["trace[957033391] 'process raft request'  (duration: 165.14207ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T19:10:52.040Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"340.964809ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-464644-m02\" ","response":"range_response_count:1 size:2156"}
	{"level":"info","ts":"2023-07-17T19:10:52.040Z","caller":"traceutil/trace.go:171","msg":"trace[1333464299] range","detail":"{range_begin:/registry/minions/multinode-464644-m02; range_end:; response_count:1; response_revision:487; }","duration":"341.237076ms","start":"2023-07-17T19:10:51.699Z","end":"2023-07-17T19:10:52.040Z","steps":["trace[1333464299] 'agreement among raft nodes before linearized reading'  (duration: 340.925609ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T19:10:52.040Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T19:10:51.699Z","time spent":"341.311427ms","remote":"127.0.0.1:42048","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":2178,"request content":"key:\"/registry/minions/multinode-464644-m02\" "}
	{"level":"warn","ts":"2023-07-17T19:10:52.041Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"345.734835ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-17T19:10:52.042Z","caller":"traceutil/trace.go:171","msg":"trace[802850548] range","detail":"{range_begin:/registry/limitranges/kube-system/; range_end:/registry/limitranges/kube-system0; response_count:0; response_revision:487; }","duration":"346.143656ms","start":"2023-07-17T19:10:51.696Z","end":"2023-07-17T19:10:52.042Z","steps":["trace[802850548] 'agreement among raft nodes before linearized reading'  (duration: 345.420317ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T19:10:52.042Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T19:10:51.696Z","time spent":"346.57921ms","remote":"127.0.0.1:42032","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":0,"response size":27,"request content":"key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" "}
	{"level":"warn","ts":"2023-07-17T19:10:52.044Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T19:10:51.696Z","time spent":"341.89611ms","remote":"127.0.0.1:42048","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":42,"response count":0,"response size":2189,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-464644-m02\" mod_revision:485 > success:<request_put:<key:\"/registry/minions/multinode-464644-m02\" value_size:2098 >> failure:<request_range:<key:\"/registry/minions/multinode-464644-m02\" > >"}
	{"level":"warn","ts":"2023-07-17T19:10:52.044Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T19:10:51.690Z","time spent":"347.925214ms","remote":"127.0.0.1:42048","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2141,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-464644-m02\" mod_revision:485 > success:<request_put:<key:\"/registry/minions/multinode-464644-m02\" value_size:2095 >> failure:<request_range:<key:\"/registry/minions/multinode-464644-m02\" > >"}
	
	* 
	* ==> kernel <==
	*  19:11:08 up 2 min,  0 users,  load average: 1.58, 0.62, 0.23
	Linux multinode-464644 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [2d534ff22d37f42cf8e64d752ebeb215259953405d690e91febcc796b44d494b] <==
	* I0717 19:10:12.979928       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0717 19:10:12.980099       1 main.go:107] hostIP = 192.168.39.174
	podIP = 192.168.39.174
	I0717 19:10:12.980548       1 main.go:116] setting mtu 1500 for CNI 
	I0717 19:10:12.980658       1 main.go:146] kindnetd IP family: "ipv4"
	I0717 19:10:12.980707       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0717 19:10:13.575837       1 main.go:223] Handling node with IPs: map[192.168.39.174:{}]
	I0717 19:10:13.575892       1 main.go:227] handling current node
	I0717 19:10:23.591162       1 main.go:223] Handling node with IPs: map[192.168.39.174:{}]
	I0717 19:10:23.591242       1 main.go:227] handling current node
	I0717 19:10:33.604427       1 main.go:223] Handling node with IPs: map[192.168.39.174:{}]
	I0717 19:10:33.604558       1 main.go:227] handling current node
	I0717 19:10:43.610007       1 main.go:223] Handling node with IPs: map[192.168.39.174:{}]
	I0717 19:10:43.610201       1 main.go:227] handling current node
	I0717 19:10:53.618473       1 main.go:223] Handling node with IPs: map[192.168.39.174:{}]
	I0717 19:10:53.618570       1 main.go:227] handling current node
	I0717 19:10:53.619045       1 main.go:223] Handling node with IPs: map[192.168.39.49:{}]
	I0717 19:10:53.619193       1 main.go:250] Node multinode-464644-m02 has CIDR [10.244.1.0/24] 
	I0717 19:10:53.619717       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.49 Flags: [] Table: 0} 
	I0717 19:11:03.639778       1 main.go:223] Handling node with IPs: map[192.168.39.174:{}]
	I0717 19:11:03.639839       1 main.go:227] handling current node
	I0717 19:11:03.639852       1 main.go:223] Handling node with IPs: map[192.168.39.49:{}]
	I0717 19:11:03.639859       1 main.go:250] Node multinode-464644-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [5ff68c0a594cf76b1e9ad2ecf972dfab0dd4b2c215658b9176f7fc1b416b4ece] <==
	* I0717 19:09:50.757818       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0717 19:09:50.772792       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 19:09:50.773016       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0717 19:09:50.773053       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0717 19:09:50.775449       1 shared_informer.go:318] Caches are synced for configmaps
	I0717 19:09:50.777047       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 19:09:50.789771       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0717 19:09:50.796449       1 controller.go:624] quota admission added evaluator for: namespaces
	I0717 19:09:50.845057       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 19:09:51.281142       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0717 19:09:51.586920       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0717 19:09:51.594083       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0717 19:09:51.594130       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 19:09:52.508571       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 19:09:52.606065       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 19:09:52.713118       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0717 19:09:52.725778       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.39.174]
	I0717 19:09:52.727118       1 controller.go:624] quota admission added evaluator for: endpoints
	I0717 19:09:52.735267       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 19:09:53.676086       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0717 19:09:54.198585       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0717 19:09:54.229937       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0717 19:09:54.243243       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0717 19:10:08.245862       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0717 19:10:08.294115       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [1358baba0d227ad1e2a3fc076c2a86954835a12f339b09355c75b75a8e1609c5] <==
	* I0717 19:10:07.640476       1 shared_informer.go:318] Caches are synced for persistent volume
	I0717 19:10:07.644262       1 shared_informer.go:318] Caches are synced for resource quota
	I0717 19:10:07.670775       1 shared_informer.go:318] Caches are synced for expand
	I0717 19:10:08.038279       1 shared_informer.go:318] Caches are synced for garbage collector
	I0717 19:10:08.038325       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0717 19:10:08.048458       1 shared_informer.go:318] Caches are synced for garbage collector
	I0717 19:10:08.256377       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5d78c9869d to 2"
	I0717 19:10:08.344543       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-2tp5c"
	I0717 19:10:08.360091       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-qwsn5"
	I0717 19:10:08.365175       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5d78c9869d to 1 from 2"
	I0717 19:10:08.923836       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-tbksm"
	I0717 19:10:09.051183       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-wqj4s"
	I0717 19:10:09.251453       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5d78c9869d-tbksm"
	I0717 19:10:17.457755       1 node_lifecycle_controller.go:1046] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0717 19:10:51.685994       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-464644-m02\" does not exist"
	I0717 19:10:52.050917       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-464644-m02" podCIDRs=[10.244.1.0/24]
	I0717 19:10:52.070448       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-t77xh"
	I0717 19:10:52.092861       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-j6ds6"
	I0717 19:10:52.464223       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-464644-m02"
	I0717 19:10:52.464725       1 event.go:307] "Event occurred" object="multinode-464644-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-464644-m02 event: Registered Node multinode-464644-m02 in Controller"
	W0717 19:10:59.176808       1 topologycache.go:232] Can't get CPU or zone information for multinode-464644-m02 node
	I0717 19:11:01.677083       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-67b7f59bb to 2"
	I0717 19:11:01.699217       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-bjpl2"
	I0717 19:11:01.722673       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-jgj4t"
	I0717 19:11:02.477893       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb-bjpl2" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-67b7f59bb-bjpl2"
	
	* 
	* ==> kube-proxy [a4d05f96defb555e6e60caf86ef6245356de0b5a8591daa82400f031201072af] <==
	* I0717 19:10:10.102156       1 node.go:141] Successfully retrieved node IP: 192.168.39.174
	I0717 19:10:10.102868       1 server_others.go:110] "Detected node IP" address="192.168.39.174"
	I0717 19:10:10.103036       1 server_others.go:554] "Using iptables proxy"
	I0717 19:10:10.158958       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0717 19:10:10.159045       1 server_others.go:192] "Using iptables Proxier"
	I0717 19:10:10.159746       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 19:10:10.161132       1 server.go:658] "Version info" version="v1.27.3"
	I0717 19:10:10.161185       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 19:10:10.163552       1 config.go:188] "Starting service config controller"
	I0717 19:10:10.163982       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 19:10:10.164325       1 config.go:97] "Starting endpoint slice config controller"
	I0717 19:10:10.164364       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 19:10:10.166249       1 config.go:315] "Starting node config controller"
	I0717 19:10:10.166382       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 19:10:10.264400       1 shared_informer.go:318] Caches are synced for service config
	I0717 19:10:10.264514       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0717 19:10:10.266577       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [8f6f6de8ab31f7b4843778e320dcdbf63bdeef7be1f4999e0c06d47d852de51f] <==
	* W0717 19:09:50.818236       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 19:09:50.818264       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 19:09:50.819808       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 19:09:50.819868       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 19:09:51.741877       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 19:09:51.741939       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 19:09:51.756087       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 19:09:51.756181       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 19:09:51.807017       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 19:09:51.807069       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 19:09:51.857670       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 19:09:51.857723       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 19:09:51.891415       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 19:09:51.891702       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 19:09:51.916786       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 19:09:51.916901       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 19:09:51.956463       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 19:09:51.956590       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 19:09:52.113541       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 19:09:52.113569       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 19:09:52.138026       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 19:09:52.138080       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 19:09:52.147263       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 19:09:52.147393       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0717 19:09:54.204087       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-07-17 19:09:17 UTC, ends at Mon 2023-07-17 19:11:08 UTC. --
	Jul 17 19:10:08 multinode-464644 kubelet[1271]: I0717 19:10:08.690402    1271 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50e3f5e0-00d9-4412-b4de-649bc29733e9-xtables-lock\") pod \"kube-proxy-qwsn5\" (UID: \"50e3f5e0-00d9-4412-b4de-649bc29733e9\") " pod="kube-system/kube-proxy-qwsn5"
	Jul 17 19:10:08 multinode-464644 kubelet[1271]: I0717 19:10:08.690452    1271 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4e4881b0-4a20-4588-a87b-d2ba9c9b6939-cni-cfg\") pod \"kindnet-2tp5c\" (UID: \"4e4881b0-4a20-4588-a87b-d2ba9c9b6939\") " pod="kube-system/kindnet-2tp5c"
	Jul 17 19:10:08 multinode-464644 kubelet[1271]: I0717 19:10:08.690478    1271 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e4881b0-4a20-4588-a87b-d2ba9c9b6939-xtables-lock\") pod \"kindnet-2tp5c\" (UID: \"4e4881b0-4a20-4588-a87b-d2ba9c9b6939\") " pod="kube-system/kindnet-2tp5c"
	Jul 17 19:10:08 multinode-464644 kubelet[1271]: I0717 19:10:08.690499    1271 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e4881b0-4a20-4588-a87b-d2ba9c9b6939-lib-modules\") pod \"kindnet-2tp5c\" (UID: \"4e4881b0-4a20-4588-a87b-d2ba9c9b6939\") " pod="kube-system/kindnet-2tp5c"
	Jul 17 19:10:08 multinode-464644 kubelet[1271]: I0717 19:10:08.690523    1271 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmd5j\" (UniqueName: \"kubernetes.io/projected/50e3f5e0-00d9-4412-b4de-649bc29733e9-kube-api-access-xmd5j\") pod \"kube-proxy-qwsn5\" (UID: \"50e3f5e0-00d9-4412-b4de-649bc29733e9\") " pod="kube-system/kube-proxy-qwsn5"
	Jul 17 19:10:08 multinode-464644 kubelet[1271]: I0717 19:10:08.690557    1271 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/50e3f5e0-00d9-4412-b4de-649bc29733e9-kube-proxy\") pod \"kube-proxy-qwsn5\" (UID: \"50e3f5e0-00d9-4412-b4de-649bc29733e9\") " pod="kube-system/kube-proxy-qwsn5"
	Jul 17 19:10:08 multinode-464644 kubelet[1271]: I0717 19:10:08.690576    1271 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50e3f5e0-00d9-4412-b4de-649bc29733e9-lib-modules\") pod \"kube-proxy-qwsn5\" (UID: \"50e3f5e0-00d9-4412-b4de-649bc29733e9\") " pod="kube-system/kube-proxy-qwsn5"
	Jul 17 19:10:08 multinode-464644 kubelet[1271]: I0717 19:10:08.690698    1271 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz822\" (UniqueName: \"kubernetes.io/projected/4e4881b0-4a20-4588-a87b-d2ba9c9b6939-kube-api-access-pz822\") pod \"kindnet-2tp5c\" (UID: \"4e4881b0-4a20-4588-a87b-d2ba9c9b6939\") " pod="kube-system/kindnet-2tp5c"
	Jul 17 19:10:13 multinode-464644 kubelet[1271]: I0717 19:10:13.554165    1271 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-qwsn5" podStartSLOduration=5.554098904 podCreationTimestamp="2023-07-17 19:10:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-17 19:10:10.542685128 +0000 UTC m=+16.372372093" watchObservedRunningTime="2023-07-17 19:10:13.554098904 +0000 UTC m=+19.383785869"
	Jul 17 19:10:13 multinode-464644 kubelet[1271]: I0717 19:10:13.983420    1271 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jul 17 19:10:14 multinode-464644 kubelet[1271]: I0717 19:10:14.047323    1271 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-2tp5c" podStartSLOduration=6.047287018 podCreationTimestamp="2023-07-17 19:10:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-17 19:10:13.554592204 +0000 UTC m=+19.384279211" watchObservedRunningTime="2023-07-17 19:10:14.047287018 +0000 UTC m=+19.876973983"
	Jul 17 19:10:14 multinode-464644 kubelet[1271]: I0717 19:10:14.047435    1271 topology_manager.go:212] "Topology Admit Handler"
	Jul 17 19:10:14 multinode-464644 kubelet[1271]: I0717 19:10:14.057077    1271 topology_manager.go:212] "Topology Admit Handler"
	Jul 17 19:10:14 multinode-464644 kubelet[1271]: I0717 19:10:14.130058    1271 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bd46cf29-49d3-4c0a-908e-a323a525d8d5-tmp\") pod \"storage-provisioner\" (UID: \"bd46cf29-49d3-4c0a-908e-a323a525d8d5\") " pod="kube-system/storage-provisioner"
	Jul 17 19:10:14 multinode-464644 kubelet[1271]: I0717 19:10:14.130189    1271 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991-config-volume\") pod \"coredns-5d78c9869d-wqj4s\" (UID: \"a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991\") " pod="kube-system/coredns-5d78c9869d-wqj4s"
	Jul 17 19:10:14 multinode-464644 kubelet[1271]: I0717 19:10:14.130236    1271 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvcp4\" (UniqueName: \"kubernetes.io/projected/a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991-kube-api-access-gvcp4\") pod \"coredns-5d78c9869d-wqj4s\" (UID: \"a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991\") " pod="kube-system/coredns-5d78c9869d-wqj4s"
	Jul 17 19:10:14 multinode-464644 kubelet[1271]: I0717 19:10:14.130258    1271 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc5mn\" (UniqueName: \"kubernetes.io/projected/bd46cf29-49d3-4c0a-908e-a323a525d8d5-kube-api-access-dc5mn\") pod \"storage-provisioner\" (UID: \"bd46cf29-49d3-4c0a-908e-a323a525d8d5\") " pod="kube-system/storage-provisioner"
	Jul 17 19:10:15 multinode-464644 kubelet[1271]: I0717 19:10:15.594066    1271 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=6.594026331 podCreationTimestamp="2023-07-17 19:10:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-17 19:10:15.593395488 +0000 UTC m=+21.423082453" watchObservedRunningTime="2023-07-17 19:10:15.594026331 +0000 UTC m=+21.423713293"
	Jul 17 19:10:15 multinode-464644 kubelet[1271]: I0717 19:10:15.594155    1271 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-wqj4s" podStartSLOduration=7.594138841 podCreationTimestamp="2023-07-17 19:10:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-17 19:10:15.566308793 +0000 UTC m=+21.395995757" watchObservedRunningTime="2023-07-17 19:10:15.594138841 +0000 UTC m=+21.423825805"
	Jul 17 19:10:54 multinode-464644 kubelet[1271]: E0717 19:10:54.423259    1271 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 17 19:10:54 multinode-464644 kubelet[1271]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 19:10:54 multinode-464644 kubelet[1271]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 19:10:54 multinode-464644 kubelet[1271]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 19:11:01 multinode-464644 kubelet[1271]: I0717 19:11:01.738536    1271 topology_manager.go:212] "Topology Admit Handler"
	Jul 17 19:11:01 multinode-464644 kubelet[1271]: I0717 19:11:01.884949    1271 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n97jv\" (UniqueName: \"kubernetes.io/projected/fe524d58-c36b-41da-82eb-f0336652f7c2-kube-api-access-n97jv\") pod \"busybox-67b7f59bb-jgj4t\" (UID: \"fe524d58-c36b-41da-82eb-f0336652f7c2\") " pod="default/busybox-67b7f59bb-jgj4t"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-464644 -n multinode-464644
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-464644 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.36s)
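The kubelet log above repeatedly fails to create the KUBE-KUBELET-CANARY chain because the ip6tables "nat" table is missing in the guest kernel. A minimal sketch, not part of the test run, of how one might confirm that from the host, assuming the multinode-464644 profile is still up (whether the module exists at all depends on the minikube guest kernel build):

    # Check whether the ip6tables nat table exists inside the VM
    out/minikube-linux-amd64 -p multinode-464644 ssh -- "sudo ip6tables -t nat -L -n"
    # List the relevant kernel modules; ip6table_nat is the one the canary error points at
    out/minikube-linux-amd64 -p multinode-464644 ssh -- "lsmod | grep -E 'ip6table_nat|nf_nat'"
    # Attempt to load it; this may fail if the module is not shipped with the guest kernel
    out/minikube-linux-amd64 -p multinode-464644 ssh -- "sudo modprobe ip6table_nat"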

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (684.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-464644
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-464644
E0717 19:13:01.330545 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/functional-685960/client.crt: no such file or directory
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-464644: exit status 82 (2m1.664428632s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-464644"  ...
	* Stopping node "multinode-464644"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
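The stop failure above (GUEST_STOP_TIMEOUT, exit status 82) leaves the VM reported as "Running". A minimal sketch, not taken from this run, of the follow-up the error box itself asks for, using the profile and log path shown above:

    # Confirm the host/VM state after the failed stop
    out/minikube-linux-amd64 status -p multinode-464644
    # Collect full logs to attach to a GitHub issue, as the message suggests
    out/minikube-linux-amd64 -p multinode-464644 logs --file=logs.txt
    # The dedicated stop log referenced in the box
    cat /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log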
multinode_test.go:292: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-464644" : exit status 82
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-464644 --wait=true -v=8 --alsologtostderr
E0717 19:16:00.134077 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt: no such file or directory
E0717 19:16:03.521177 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt: no such file or directory
E0717 19:18:01.330802 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/functional-685960/client.crt: no such file or directory
E0717 19:19:24.378080 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/functional-685960/client.crt: no such file or directory
E0717 19:21:00.133797 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt: no such file or directory
E0717 19:21:03.521695 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt: no such file or directory
E0717 19:22:26.568530 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt: no such file or directory
E0717 19:23:01.330879 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/functional-685960/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-464644 --wait=true -v=8 --alsologtostderr: (9m19.846881519s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-464644
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-464644 -n multinode-464644
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-464644 logs -n 25: (1.766253821s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-464644 ssh -n                                                                 | multinode-464644 | jenkins | v1.30.1 | 17 Jul 23 19:11 UTC | 17 Jul 23 19:11 UTC |
	|         | multinode-464644-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-464644 cp multinode-464644-m02:/home/docker/cp-test.txt                       | multinode-464644 | jenkins | v1.30.1 | 17 Jul 23 19:11 UTC | 17 Jul 23 19:11 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile291099792/001/cp-test_multinode-464644-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-464644 ssh -n                                                                 | multinode-464644 | jenkins | v1.30.1 | 17 Jul 23 19:11 UTC | 17 Jul 23 19:11 UTC |
	|         | multinode-464644-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-464644 cp multinode-464644-m02:/home/docker/cp-test.txt                       | multinode-464644 | jenkins | v1.30.1 | 17 Jul 23 19:11 UTC | 17 Jul 23 19:11 UTC |
	|         | multinode-464644:/home/docker/cp-test_multinode-464644-m02_multinode-464644.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-464644 ssh -n                                                                 | multinode-464644 | jenkins | v1.30.1 | 17 Jul 23 19:11 UTC | 17 Jul 23 19:11 UTC |
	|         | multinode-464644-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-464644 ssh -n multinode-464644 sudo cat                                       | multinode-464644 | jenkins | v1.30.1 | 17 Jul 23 19:11 UTC | 17 Jul 23 19:11 UTC |
	|         | /home/docker/cp-test_multinode-464644-m02_multinode-464644.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-464644 cp multinode-464644-m02:/home/docker/cp-test.txt                       | multinode-464644 | jenkins | v1.30.1 | 17 Jul 23 19:11 UTC | 17 Jul 23 19:11 UTC |
	|         | multinode-464644-m03:/home/docker/cp-test_multinode-464644-m02_multinode-464644-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-464644 ssh -n                                                                 | multinode-464644 | jenkins | v1.30.1 | 17 Jul 23 19:11 UTC | 17 Jul 23 19:11 UTC |
	|         | multinode-464644-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-464644 ssh -n multinode-464644-m03 sudo cat                                   | multinode-464644 | jenkins | v1.30.1 | 17 Jul 23 19:11 UTC | 17 Jul 23 19:11 UTC |
	|         | /home/docker/cp-test_multinode-464644-m02_multinode-464644-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-464644 cp testdata/cp-test.txt                                                | multinode-464644 | jenkins | v1.30.1 | 17 Jul 23 19:11 UTC | 17 Jul 23 19:11 UTC |
	|         | multinode-464644-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-464644 ssh -n                                                                 | multinode-464644 | jenkins | v1.30.1 | 17 Jul 23 19:11 UTC | 17 Jul 23 19:11 UTC |
	|         | multinode-464644-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-464644 cp multinode-464644-m03:/home/docker/cp-test.txt                       | multinode-464644 | jenkins | v1.30.1 | 17 Jul 23 19:11 UTC | 17 Jul 23 19:11 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile291099792/001/cp-test_multinode-464644-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-464644 ssh -n                                                                 | multinode-464644 | jenkins | v1.30.1 | 17 Jul 23 19:11 UTC | 17 Jul 23 19:11 UTC |
	|         | multinode-464644-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-464644 cp multinode-464644-m03:/home/docker/cp-test.txt                       | multinode-464644 | jenkins | v1.30.1 | 17 Jul 23 19:11 UTC | 17 Jul 23 19:11 UTC |
	|         | multinode-464644:/home/docker/cp-test_multinode-464644-m03_multinode-464644.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-464644 ssh -n                                                                 | multinode-464644 | jenkins | v1.30.1 | 17 Jul 23 19:11 UTC | 17 Jul 23 19:11 UTC |
	|         | multinode-464644-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-464644 ssh -n multinode-464644 sudo cat                                       | multinode-464644 | jenkins | v1.30.1 | 17 Jul 23 19:11 UTC | 17 Jul 23 19:11 UTC |
	|         | /home/docker/cp-test_multinode-464644-m03_multinode-464644.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-464644 cp multinode-464644-m03:/home/docker/cp-test.txt                       | multinode-464644 | jenkins | v1.30.1 | 17 Jul 23 19:11 UTC | 17 Jul 23 19:11 UTC |
	|         | multinode-464644-m02:/home/docker/cp-test_multinode-464644-m03_multinode-464644-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-464644 ssh -n                                                                 | multinode-464644 | jenkins | v1.30.1 | 17 Jul 23 19:11 UTC | 17 Jul 23 19:11 UTC |
	|         | multinode-464644-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-464644 ssh -n multinode-464644-m02 sudo cat                                   | multinode-464644 | jenkins | v1.30.1 | 17 Jul 23 19:11 UTC | 17 Jul 23 19:11 UTC |
	|         | /home/docker/cp-test_multinode-464644-m03_multinode-464644-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-464644 node stop m03                                                          | multinode-464644 | jenkins | v1.30.1 | 17 Jul 23 19:11 UTC | 17 Jul 23 19:12 UTC |
	| node    | multinode-464644 node start                                                             | multinode-464644 | jenkins | v1.30.1 | 17 Jul 23 19:12 UTC | 17 Jul 23 19:12 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-464644                                                                | multinode-464644 | jenkins | v1.30.1 | 17 Jul 23 19:12 UTC |                     |
	| stop    | -p multinode-464644                                                                     | multinode-464644 | jenkins | v1.30.1 | 17 Jul 23 19:12 UTC |                     |
	| start   | -p multinode-464644                                                                     | multinode-464644 | jenkins | v1.30.1 | 17 Jul 23 19:14 UTC | 17 Jul 23 19:23 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-464644                                                                | multinode-464644 | jenkins | v1.30.1 | 17 Jul 23 19:23 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 19:14:37
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 19:14:37.054636 1084713 out.go:296] Setting OutFile to fd 1 ...
	I0717 19:14:37.054790 1084713 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:14:37.054802 1084713 out.go:309] Setting ErrFile to fd 2...
	I0717 19:14:37.054809 1084713 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:14:37.055024 1084713 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1061725/.minikube/bin
	I0717 19:14:37.055653 1084713 out.go:303] Setting JSON to false
	I0717 19:14:37.056750 1084713 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":14228,"bootTime":1689607049,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 19:14:37.056822 1084713 start.go:138] virtualization: kvm guest
	I0717 19:14:37.060058 1084713 out.go:177] * [multinode-464644] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 19:14:37.062238 1084713 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 19:14:37.062310 1084713 notify.go:220] Checking for updates...
	I0717 19:14:37.064767 1084713 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:14:37.066964 1084713 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 19:14:37.069020 1084713 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1061725/.minikube
	I0717 19:14:37.070745 1084713 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 19:14:37.072776 1084713 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 19:14:37.075061 1084713 config.go:182] Loaded profile config "multinode-464644": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:14:37.075250 1084713 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 19:14:37.076302 1084713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:14:37.076391 1084713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:14:37.092501 1084713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42365
	I0717 19:14:37.092993 1084713 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:14:37.093818 1084713 main.go:141] libmachine: Using API Version  1
	I0717 19:14:37.093847 1084713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:14:37.094282 1084713 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:14:37.094584 1084713 main.go:141] libmachine: (multinode-464644) Calling .DriverName
	I0717 19:14:37.135015 1084713 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 19:14:37.136765 1084713 start.go:298] selected driver: kvm2
	I0717 19:14:37.136783 1084713 start.go:880] validating driver "kvm2" against &{Name:multinode-464644 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-46464
4 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.49 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.247 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:fal
se istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuth
Sock: SSHAgentPID:0}
	I0717 19:14:37.136917 1084713 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 19:14:37.137235 1084713 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:14:37.137331 1084713 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16890-1061725/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 19:14:37.154324 1084713 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.30.1
	I0717 19:14:37.155054 1084713 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 19:14:37.155103 1084713 cni.go:84] Creating CNI manager for ""
	I0717 19:14:37.155150 1084713 cni.go:137] 3 nodes found, recommending kindnet
	I0717 19:14:37.155168 1084713 start_flags.go:319] config:
	{Name:multinode-464644 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-464644 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.49 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.247 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:fal
se metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:14:37.155451 1084713 iso.go:125] acquiring lock: {Name:mk4c08fdde891ec9d41b144ca36ec57b2da32175 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:14:37.157882 1084713 out.go:177] * Starting control plane node multinode-464644 in cluster multinode-464644
	I0717 19:14:37.159517 1084713 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 19:14:37.159585 1084713 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0717 19:14:37.159608 1084713 cache.go:57] Caching tarball of preloaded images
	I0717 19:14:37.159710 1084713 preload.go:174] Found /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 19:14:37.159725 1084713 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 19:14:37.159914 1084713 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/config.json ...
	I0717 19:14:37.160188 1084713 start.go:365] acquiring machines lock for multinode-464644: {Name:mk1fbc5d19c9a63796b02883911e39f1c406ff89 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:14:37.160246 1084713 start.go:369] acquired machines lock for "multinode-464644" in 31.586µs
	I0717 19:14:37.160268 1084713 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:14:37.160276 1084713 fix.go:54] fixHost starting: 
	I0717 19:14:37.160596 1084713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:14:37.160622 1084713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:14:37.175939 1084713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34411
	I0717 19:14:37.176448 1084713 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:14:37.177003 1084713 main.go:141] libmachine: Using API Version  1
	I0717 19:14:37.177041 1084713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:14:37.177429 1084713 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:14:37.177671 1084713 main.go:141] libmachine: (multinode-464644) Calling .DriverName
	I0717 19:14:37.177856 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetState
	I0717 19:14:37.179973 1084713 fix.go:102] recreateIfNeeded on multinode-464644: state=Running err=<nil>
	W0717 19:14:37.180018 1084713 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 19:14:37.182804 1084713 out.go:177] * Updating the running kvm2 "multinode-464644" VM ...
	I0717 19:14:37.184600 1084713 machine.go:88] provisioning docker machine ...
	I0717 19:14:37.184640 1084713 main.go:141] libmachine: (multinode-464644) Calling .DriverName
	I0717 19:14:37.185015 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetMachineName
	I0717 19:14:37.185304 1084713 buildroot.go:166] provisioning hostname "multinode-464644"
	I0717 19:14:37.185330 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetMachineName
	I0717 19:14:37.185551 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHHostname
	I0717 19:14:37.188729 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:14:37.189292 1084713 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:09:20 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:14:37.189330 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:14:37.189512 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHPort
	I0717 19:14:37.189749 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHKeyPath
	I0717 19:14:37.189949 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHKeyPath
	I0717 19:14:37.190157 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHUsername
	I0717 19:14:37.190353 1084713 main.go:141] libmachine: Using SSH client type: native
	I0717 19:14:37.190832 1084713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0717 19:14:37.190850 1084713 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-464644 && echo "multinode-464644" | sudo tee /etc/hostname
	I0717 19:14:55.557936 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:15:01.637882 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:15:04.709970 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:15:10.789941 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:15:13.862000 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:15:19.941983 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:15:23.013997 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:15:29.093966 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:15:32.165984 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:15:38.245924 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:15:41.317921 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:15:47.397923 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:15:50.469873 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:15:56.549943 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:15:59.622015 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:16:05.702014 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:16:08.773899 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:16:14.853900 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:16:17.925919 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:16:24.005936 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:16:27.077995 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:16:33.157979 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:16:36.229966 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:16:42.309912 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:16:45.381937 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:16:51.461918 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:16:54.534028 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:17:00.613962 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:17:03.685944 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:17:09.765897 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:17:12.837896 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:17:18.917888 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:17:21.989961 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:17:28.069903 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:17:31.141884 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:17:37.221920 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:17:40.293912 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:17:46.374037 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:17:49.445998 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:17:55.525945 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:17:58.597935 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:18:04.677893 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:18:07.750000 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:18:13.829978 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:18:16.901948 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:18:22.982027 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:18:26.053891 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:18:32.133894 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:18:35.206065 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:18:41.286002 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:18:44.357993 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:18:50.438046 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:18:53.509948 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:18:59.589911 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:19:02.662050 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:19:08.741922 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:19:11.813860 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:19:17.893936 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:19:20.965838 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:19:27.045881 1084713 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0717 19:19:30.048616 1084713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:19:30.048715 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHHostname
	I0717 19:19:30.051645 1084713 machine.go:91] provisioned docker machine in 4m52.867003112s
	I0717 19:19:30.051746 1084713 fix.go:56] fixHost completed within 4m52.89147257s
	I0717 19:19:30.051755 1084713 start.go:83] releasing machines lock for "multinode-464644", held for 4m52.891496393s
	W0717 19:19:30.051792 1084713 start.go:688] error starting host: provision: host is not running
	W0717 19:19:30.051961 1084713 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0717 19:19:30.051971 1084713 start.go:703] Will try again in 5 seconds ...
	I0717 19:19:35.054135 1084713 start.go:365] acquiring machines lock for multinode-464644: {Name:mk1fbc5d19c9a63796b02883911e39f1c406ff89 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:19:35.054340 1084713 start.go:369] acquired machines lock for "multinode-464644" in 112.95µs
	I0717 19:19:35.054372 1084713 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:19:35.054383 1084713 fix.go:54] fixHost starting: 
	I0717 19:19:35.054817 1084713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:19:35.054852 1084713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:19:35.071155 1084713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38557
	I0717 19:19:35.071679 1084713 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:19:35.072339 1084713 main.go:141] libmachine: Using API Version  1
	I0717 19:19:35.072365 1084713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:19:35.072858 1084713 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:19:35.073155 1084713 main.go:141] libmachine: (multinode-464644) Calling .DriverName
	I0717 19:19:35.073499 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetState
	I0717 19:19:35.075555 1084713 fix.go:102] recreateIfNeeded on multinode-464644: state=Stopped err=<nil>
	I0717 19:19:35.075592 1084713 main.go:141] libmachine: (multinode-464644) Calling .DriverName
	W0717 19:19:35.075804 1084713 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 19:19:35.078582 1084713 out.go:177] * Restarting existing kvm2 VM for "multinode-464644" ...
	I0717 19:19:35.080614 1084713 main.go:141] libmachine: (multinode-464644) Calling .Start
	I0717 19:19:35.080970 1084713 main.go:141] libmachine: (multinode-464644) Ensuring networks are active...
	I0717 19:19:35.082154 1084713 main.go:141] libmachine: (multinode-464644) Ensuring network default is active
	I0717 19:19:35.082609 1084713 main.go:141] libmachine: (multinode-464644) Ensuring network mk-multinode-464644 is active
	I0717 19:19:35.083106 1084713 main.go:141] libmachine: (multinode-464644) Getting domain xml...
	I0717 19:19:35.083886 1084713 main.go:141] libmachine: (multinode-464644) Creating domain...
	I0717 19:19:36.368275 1084713 main.go:141] libmachine: (multinode-464644) Waiting to get IP...
	I0717 19:19:36.369274 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:36.369772 1084713 main.go:141] libmachine: (multinode-464644) DBG | unable to find current IP address of domain multinode-464644 in network mk-multinode-464644
	I0717 19:19:36.369822 1084713 main.go:141] libmachine: (multinode-464644) DBG | I0717 19:19:36.369732 1085943 retry.go:31] will retry after 192.906643ms: waiting for machine to come up
	I0717 19:19:36.564410 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:36.565028 1084713 main.go:141] libmachine: (multinode-464644) DBG | unable to find current IP address of domain multinode-464644 in network mk-multinode-464644
	I0717 19:19:36.565065 1084713 main.go:141] libmachine: (multinode-464644) DBG | I0717 19:19:36.564957 1085943 retry.go:31] will retry after 259.661153ms: waiting for machine to come up
	I0717 19:19:36.826657 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:36.827299 1084713 main.go:141] libmachine: (multinode-464644) DBG | unable to find current IP address of domain multinode-464644 in network mk-multinode-464644
	I0717 19:19:36.827332 1084713 main.go:141] libmachine: (multinode-464644) DBG | I0717 19:19:36.827268 1085943 retry.go:31] will retry after 336.692137ms: waiting for machine to come up
	I0717 19:19:37.166174 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:37.166874 1084713 main.go:141] libmachine: (multinode-464644) DBG | unable to find current IP address of domain multinode-464644 in network mk-multinode-464644
	I0717 19:19:37.166906 1084713 main.go:141] libmachine: (multinode-464644) DBG | I0717 19:19:37.166800 1085943 retry.go:31] will retry after 478.528657ms: waiting for machine to come up
	I0717 19:19:37.646920 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:37.647549 1084713 main.go:141] libmachine: (multinode-464644) DBG | unable to find current IP address of domain multinode-464644 in network mk-multinode-464644
	I0717 19:19:37.647579 1084713 main.go:141] libmachine: (multinode-464644) DBG | I0717 19:19:37.647470 1085943 retry.go:31] will retry after 537.955737ms: waiting for machine to come up
	I0717 19:19:38.187639 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:38.188302 1084713 main.go:141] libmachine: (multinode-464644) DBG | unable to find current IP address of domain multinode-464644 in network mk-multinode-464644
	I0717 19:19:38.188337 1084713 main.go:141] libmachine: (multinode-464644) DBG | I0717 19:19:38.188247 1085943 retry.go:31] will retry after 601.638569ms: waiting for machine to come up
	I0717 19:19:38.792155 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:38.792612 1084713 main.go:141] libmachine: (multinode-464644) DBG | unable to find current IP address of domain multinode-464644 in network mk-multinode-464644
	I0717 19:19:38.792649 1084713 main.go:141] libmachine: (multinode-464644) DBG | I0717 19:19:38.792547 1085943 retry.go:31] will retry after 916.325016ms: waiting for machine to come up
	I0717 19:19:39.710861 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:39.711372 1084713 main.go:141] libmachine: (multinode-464644) DBG | unable to find current IP address of domain multinode-464644 in network mk-multinode-464644
	I0717 19:19:39.711414 1084713 main.go:141] libmachine: (multinode-464644) DBG | I0717 19:19:39.711308 1085943 retry.go:31] will retry after 1.216755731s: waiting for machine to come up
	I0717 19:19:40.929901 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:40.930448 1084713 main.go:141] libmachine: (multinode-464644) DBG | unable to find current IP address of domain multinode-464644 in network mk-multinode-464644
	I0717 19:19:40.930488 1084713 main.go:141] libmachine: (multinode-464644) DBG | I0717 19:19:40.930369 1085943 retry.go:31] will retry after 1.66912358s: waiting for machine to come up
	I0717 19:19:42.602496 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:42.603049 1084713 main.go:141] libmachine: (multinode-464644) DBG | unable to find current IP address of domain multinode-464644 in network mk-multinode-464644
	I0717 19:19:42.603084 1084713 main.go:141] libmachine: (multinode-464644) DBG | I0717 19:19:42.602973 1085943 retry.go:31] will retry after 1.712355865s: waiting for machine to come up
	I0717 19:19:44.316623 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:44.317198 1084713 main.go:141] libmachine: (multinode-464644) DBG | unable to find current IP address of domain multinode-464644 in network mk-multinode-464644
	I0717 19:19:44.317232 1084713 main.go:141] libmachine: (multinode-464644) DBG | I0717 19:19:44.317135 1085943 retry.go:31] will retry after 2.655988834s: waiting for machine to come up
	I0717 19:19:46.974685 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:46.975114 1084713 main.go:141] libmachine: (multinode-464644) DBG | unable to find current IP address of domain multinode-464644 in network mk-multinode-464644
	I0717 19:19:46.975155 1084713 main.go:141] libmachine: (multinode-464644) DBG | I0717 19:19:46.975108 1085943 retry.go:31] will retry after 2.682985975s: waiting for machine to come up
	I0717 19:19:49.661171 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:49.661689 1084713 main.go:141] libmachine: (multinode-464644) DBG | unable to find current IP address of domain multinode-464644 in network mk-multinode-464644
	I0717 19:19:49.661719 1084713 main.go:141] libmachine: (multinode-464644) DBG | I0717 19:19:49.661625 1085943 retry.go:31] will retry after 3.343821829s: waiting for machine to come up
	I0717 19:19:53.009133 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:53.009729 1084713 main.go:141] libmachine: (multinode-464644) Found IP for machine: 192.168.39.174
	I0717 19:19:53.009765 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has current primary IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:53.009776 1084713 main.go:141] libmachine: (multinode-464644) Reserving static IP address...
	I0717 19:19:53.010301 1084713 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "multinode-464644", mac: "52:54:00:7b:06:f6", ip: "192.168.39.174"} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:19:47 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:19:53.010327 1084713 main.go:141] libmachine: (multinode-464644) DBG | skip adding static IP to network mk-multinode-464644 - found existing host DHCP lease matching {name: "multinode-464644", mac: "52:54:00:7b:06:f6", ip: "192.168.39.174"}
	I0717 19:19:53.010340 1084713 main.go:141] libmachine: (multinode-464644) Reserved static IP address: 192.168.39.174
	I0717 19:19:53.010359 1084713 main.go:141] libmachine: (multinode-464644) Waiting for SSH to be available...
	I0717 19:19:53.010377 1084713 main.go:141] libmachine: (multinode-464644) DBG | Getting to WaitForSSH function...
	I0717 19:19:53.012643 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:53.013128 1084713 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:19:47 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:19:53.013169 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:53.013324 1084713 main.go:141] libmachine: (multinode-464644) DBG | Using SSH client type: external
	I0717 19:19:53.013374 1084713 main.go:141] libmachine: (multinode-464644) DBG | Using SSH private key: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644/id_rsa (-rw-------)
	I0717 19:19:53.013414 1084713 main.go:141] libmachine: (multinode-464644) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:19:53.013437 1084713 main.go:141] libmachine: (multinode-464644) DBG | About to run SSH command:
	I0717 19:19:53.013447 1084713 main.go:141] libmachine: (multinode-464644) DBG | exit 0
	I0717 19:19:53.106190 1084713 main.go:141] libmachine: (multinode-464644) DBG | SSH cmd err, output: <nil>: 
	I0717 19:19:53.106666 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetConfigRaw
	I0717 19:19:53.107439 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetIP
	I0717 19:19:53.110426 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:53.110902 1084713 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:19:47 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:19:53.110928 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:53.111292 1084713 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/config.json ...
	I0717 19:19:53.111595 1084713 machine.go:88] provisioning docker machine ...
	I0717 19:19:53.111618 1084713 main.go:141] libmachine: (multinode-464644) Calling .DriverName
	I0717 19:19:53.111873 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetMachineName
	I0717 19:19:53.112096 1084713 buildroot.go:166] provisioning hostname "multinode-464644"
	I0717 19:19:53.112120 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetMachineName
	I0717 19:19:53.112358 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHHostname
	I0717 19:19:53.114812 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:53.115281 1084713 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:19:47 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:19:53.115309 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:53.115457 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHPort
	I0717 19:19:53.115674 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHKeyPath
	I0717 19:19:53.115846 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHKeyPath
	I0717 19:19:53.116001 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHUsername
	I0717 19:19:53.116173 1084713 main.go:141] libmachine: Using SSH client type: native
	I0717 19:19:53.116623 1084713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0717 19:19:53.116638 1084713 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-464644 && echo "multinode-464644" | sudo tee /etc/hostname
	I0717 19:19:53.254817 1084713 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-464644
	
	I0717 19:19:53.254858 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHHostname
	I0717 19:19:53.258464 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:53.259109 1084713 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:19:47 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:19:53.259139 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:53.259361 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHPort
	I0717 19:19:53.259646 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHKeyPath
	I0717 19:19:53.259829 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHKeyPath
	I0717 19:19:53.260089 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHUsername
	I0717 19:19:53.260344 1084713 main.go:141] libmachine: Using SSH client type: native
	I0717 19:19:53.260985 1084713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0717 19:19:53.261018 1084713 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-464644' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-464644/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-464644' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:19:53.395979 1084713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:19:53.396051 1084713 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16890-1061725/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-1061725/.minikube}
	I0717 19:19:53.396101 1084713 buildroot.go:174] setting up certificates
	I0717 19:19:53.396140 1084713 provision.go:83] configureAuth start
	I0717 19:19:53.396175 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetMachineName
	I0717 19:19:53.396558 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetIP
	I0717 19:19:53.400303 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:53.400848 1084713 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:19:47 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:19:53.400894 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:53.401146 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHHostname
	I0717 19:19:53.403946 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:53.404417 1084713 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:19:47 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:19:53.404461 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:53.404590 1084713 provision.go:138] copyHostCerts
	I0717 19:19:53.404623 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem
	I0717 19:19:53.404656 1084713 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem, removing ...
	I0717 19:19:53.404665 1084713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem
	I0717 19:19:53.404733 1084713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem (1082 bytes)
	I0717 19:19:53.404821 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem
	I0717 19:19:53.404845 1084713 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem, removing ...
	I0717 19:19:53.404853 1084713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem
	I0717 19:19:53.404878 1084713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem (1123 bytes)
	I0717 19:19:53.404918 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem
	I0717 19:19:53.404938 1084713 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem, removing ...
	I0717 19:19:53.404944 1084713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem
	I0717 19:19:53.404963 1084713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem (1675 bytes)
	I0717 19:19:53.405010 1084713 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem org=jenkins.multinode-464644 san=[192.168.39.174 192.168.39.174 localhost 127.0.0.1 minikube multinode-464644]
	I0717 19:19:53.477605 1084713 provision.go:172] copyRemoteCerts
	I0717 19:19:53.477730 1084713 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:19:53.477781 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHHostname
	I0717 19:19:53.480709 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:53.481198 1084713 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:19:47 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:19:53.481234 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:53.481622 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHPort
	I0717 19:19:53.481862 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHKeyPath
	I0717 19:19:53.482066 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHUsername
	I0717 19:19:53.482259 1084713 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644/id_rsa Username:docker}
	I0717 19:19:53.575373 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 19:19:53.575457 1084713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 19:19:53.600904 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 19:19:53.601006 1084713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0717 19:19:53.627059 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 19:19:53.627155 1084713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:19:53.652634 1084713 provision.go:86] duration metric: configureAuth took 256.458213ms
	I0717 19:19:53.652674 1084713 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:19:53.652945 1084713 config.go:182] Loaded profile config "multinode-464644": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:19:53.653034 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHHostname
	I0717 19:19:53.656368 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:53.656906 1084713 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:19:47 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:19:53.656950 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:53.657155 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHPort
	I0717 19:19:53.657379 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHKeyPath
	I0717 19:19:53.657629 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHKeyPath
	I0717 19:19:53.657791 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHUsername
	I0717 19:19:53.658000 1084713 main.go:141] libmachine: Using SSH client type: native
	I0717 19:19:53.658472 1084713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0717 19:19:53.658493 1084713 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:19:53.998171 1084713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:19:53.998207 1084713 machine.go:91] provisioned docker machine in 886.595105ms
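The provisioning step above finishes by writing a small sysconfig drop-in for CRI-O and restarting the service. Reconstructed from the command shown in the log (not read back from the guest), the file left behind would look like:

	# /etc/sysconfig/crio.minikube (reconstructed from the tee command above)
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '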
	I0717 19:19:53.998222 1084713 start.go:300] post-start starting for "multinode-464644" (driver="kvm2")
	I0717 19:19:53.998263 1084713 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:19:53.998303 1084713 main.go:141] libmachine: (multinode-464644) Calling .DriverName
	I0717 19:19:53.998695 1084713 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:19:53.998741 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHHostname
	I0717 19:19:54.002234 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:54.002733 1084713 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:19:47 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:19:54.002766 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:54.003061 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHPort
	I0717 19:19:54.003310 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHKeyPath
	I0717 19:19:54.003475 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHUsername
	I0717 19:19:54.003683 1084713 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644/id_rsa Username:docker}
	I0717 19:19:54.095911 1084713 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:19:54.101090 1084713 command_runner.go:130] > NAME=Buildroot
	I0717 19:19:54.101129 1084713 command_runner.go:130] > VERSION=2021.02.12-1-gf5d52c7-dirty
	I0717 19:19:54.101134 1084713 command_runner.go:130] > ID=buildroot
	I0717 19:19:54.101140 1084713 command_runner.go:130] > VERSION_ID=2021.02.12
	I0717 19:19:54.101144 1084713 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0717 19:19:54.101215 1084713 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 19:19:54.101228 1084713 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/addons for local assets ...
	I0717 19:19:54.101313 1084713 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/files for local assets ...
	I0717 19:19:54.101394 1084713 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> 10689542.pem in /etc/ssl/certs
	I0717 19:19:54.101404 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> /etc/ssl/certs/10689542.pem
	I0717 19:19:54.101484 1084713 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:19:54.110907 1084713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:19:54.137542 1084713 start.go:303] post-start completed in 139.30091ms
	I0717 19:19:54.137587 1084713 fix.go:56] fixHost completed within 19.083204308s
	I0717 19:19:54.137648 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHHostname
	I0717 19:19:54.140613 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:54.141018 1084713 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:19:47 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:19:54.141062 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:54.141392 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHPort
	I0717 19:19:54.141636 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHKeyPath
	I0717 19:19:54.141824 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHKeyPath
	I0717 19:19:54.141991 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHUsername
	I0717 19:19:54.142181 1084713 main.go:141] libmachine: Using SSH client type: native
	I0717 19:19:54.142640 1084713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0717 19:19:54.142654 1084713 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:19:54.267188 1084713 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689621594.211735322
	
	I0717 19:19:54.267240 1084713 fix.go:206] guest clock: 1689621594.211735322
	I0717 19:19:54.267252 1084713 fix.go:219] Guest: 2023-07-17 19:19:54.211735322 +0000 UTC Remote: 2023-07-17 19:19:54.13759231 +0000 UTC m=+317.121450016 (delta=74.143012ms)
	I0717 19:19:54.267284 1084713 fix.go:190] guest clock delta is within tolerance: 74.143012ms
	I0717 19:19:54.267293 1084713 start.go:83] releasing machines lock for "multinode-464644", held for 19.212938524s
	I0717 19:19:54.267331 1084713 main.go:141] libmachine: (multinode-464644) Calling .DriverName
	I0717 19:19:54.267715 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetIP
	I0717 19:19:54.270721 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:54.271234 1084713 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:19:47 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:19:54.271267 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:54.271547 1084713 main.go:141] libmachine: (multinode-464644) Calling .DriverName
	I0717 19:19:54.272265 1084713 main.go:141] libmachine: (multinode-464644) Calling .DriverName
	I0717 19:19:54.272554 1084713 main.go:141] libmachine: (multinode-464644) Calling .DriverName
	I0717 19:19:54.272700 1084713 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:19:54.272769 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHHostname
	I0717 19:19:54.272844 1084713 ssh_runner.go:195] Run: cat /version.json
	I0717 19:19:54.272888 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHHostname
	I0717 19:19:54.275992 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:54.276083 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:54.276517 1084713 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:19:47 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:19:54.276551 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:54.276568 1084713 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:19:47 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:19:54.276584 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:54.276723 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHPort
	I0717 19:19:54.276853 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHPort
	I0717 19:19:54.276951 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHKeyPath
	I0717 19:19:54.277032 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHKeyPath
	I0717 19:19:54.277184 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHUsername
	I0717 19:19:54.277204 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHUsername
	I0717 19:19:54.277403 1084713 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644/id_rsa Username:docker}
	I0717 19:19:54.277402 1084713 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644/id_rsa Username:docker}
	I0717 19:19:54.393025 1084713 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0717 19:19:54.394076 1084713 command_runner.go:130] > {"iso_version": "v1.31.0", "kicbase_version": "v0.0.40", "minikube_version": "v1.31.0", "commit": "be0194f682c2c37366eacb8c13503cb6c7a41cf8"}
	W0717 19:19:54.394230 1084713 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 19:19:54.394374 1084713 ssh_runner.go:195] Run: systemctl --version
	I0717 19:19:54.400547 1084713 command_runner.go:130] > systemd 247 (247)
	I0717 19:19:54.400594 1084713 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0717 19:19:54.400870 1084713 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:19:54.549756 1084713 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 19:19:54.556167 1084713 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0717 19:19:54.556233 1084713 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:19:54.556298 1084713 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:19:54.572258 1084713 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0717 19:19:54.572340 1084713 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
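The step above sidelines any pre-existing bridge/podman CNI configs so the cluster's own CNI configuration takes precedence. A minimal shell equivalent of that rename, assuming the same /etc/cni/net.d layout as in this run, is:

	# illustrative sketch: move bridge/podman CNI configs out of the way
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;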
	I0717 19:19:54.572352 1084713 start.go:469] detecting cgroup driver to use...
	I0717 19:19:54.572512 1084713 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:19:54.588341 1084713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:19:54.603113 1084713 docker.go:196] disabling cri-docker service (if available) ...
	I0717 19:19:54.603237 1084713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:19:54.620345 1084713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:19:54.637663 1084713 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:19:54.744236 1084713 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0717 19:19:54.745168 1084713 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:19:54.761880 1084713 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0717 19:19:54.864787 1084713 docker.go:212] disabling docker service ...
	I0717 19:19:54.864870 1084713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:19:54.880028 1084713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:19:54.892978 1084713 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0717 19:19:54.893668 1084713 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:19:54.910181 1084713 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0717 19:19:55.003686 1084713 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:19:55.016976 1084713 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0717 19:19:55.017013 1084713 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0717 19:19:55.104923 1084713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:19:55.118623 1084713 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:19:55.138222 1084713 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0717 19:19:55.138277 1084713 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:19:55.138343 1084713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:19:55.149302 1084713 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:19:55.149397 1084713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:19:55.160052 1084713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:19:55.171976 1084713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
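Taken together, the sed edits above leave the CRI-O drop-in with a pinned pause image, the cgroupfs cgroup manager, and a pod-scoped conmon cgroup. An illustrative resulting fragment of /etc/crio/crio.conf.d/02-crio.conf (section placement assumed from CRI-O defaults, not captured from this run) would be:

	# illustrative fragment only
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"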
	I0717 19:19:55.183052 1084713 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:19:55.195045 1084713 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:19:55.205726 1084713 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:19:55.205787 1084713 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:19:55.205858 1084713 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:19:55.220931 1084713 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:19:55.231581 1084713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:19:55.335545 1084713 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:19:55.536084 1084713 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:19:55.536179 1084713 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:19:55.542051 1084713 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0717 19:19:55.542080 1084713 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0717 19:19:55.542087 1084713 command_runner.go:130] > Device: 16h/22d	Inode: 773         Links: 1
	I0717 19:19:55.542100 1084713 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 19:19:55.542105 1084713 command_runner.go:130] > Access: 2023-07-17 19:19:55.466331536 +0000
	I0717 19:19:55.542111 1084713 command_runner.go:130] > Modify: 2023-07-17 19:19:55.466331536 +0000
	I0717 19:19:55.542117 1084713 command_runner.go:130] > Change: 2023-07-17 19:19:55.466331536 +0000
	I0717 19:19:55.542121 1084713 command_runner.go:130] >  Birth: -
	I0717 19:19:55.542141 1084713 start.go:537] Will wait 60s for crictl version
	I0717 19:19:55.542189 1084713 ssh_runner.go:195] Run: which crictl
	I0717 19:19:55.546481 1084713 command_runner.go:130] > /usr/bin/crictl
	I0717 19:19:55.546579 1084713 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:19:55.581930 1084713 command_runner.go:130] > Version:  0.1.0
	I0717 19:19:55.581960 1084713 command_runner.go:130] > RuntimeName:  cri-o
	I0717 19:19:55.581984 1084713 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0717 19:19:55.581990 1084713 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0717 19:19:55.583349 1084713 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 19:19:55.583440 1084713 ssh_runner.go:195] Run: crio --version
	I0717 19:19:55.634880 1084713 command_runner.go:130] > crio version 1.24.1
	I0717 19:19:55.634907 1084713 command_runner.go:130] > Version:          1.24.1
	I0717 19:19:55.634915 1084713 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0717 19:19:55.634919 1084713 command_runner.go:130] > GitTreeState:     dirty
	I0717 19:19:55.634937 1084713 command_runner.go:130] > BuildDate:        2023-07-15T02:24:22Z
	I0717 19:19:55.634942 1084713 command_runner.go:130] > GoVersion:        go1.19.9
	I0717 19:19:55.634953 1084713 command_runner.go:130] > Compiler:         gc
	I0717 19:19:55.634957 1084713 command_runner.go:130] > Platform:         linux/amd64
	I0717 19:19:55.634963 1084713 command_runner.go:130] > Linkmode:         dynamic
	I0717 19:19:55.634969 1084713 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0717 19:19:55.634973 1084713 command_runner.go:130] > SeccompEnabled:   true
	I0717 19:19:55.634978 1084713 command_runner.go:130] > AppArmorEnabled:  false
	I0717 19:19:55.636248 1084713 ssh_runner.go:195] Run: crio --version
	I0717 19:19:55.687009 1084713 command_runner.go:130] > crio version 1.24.1
	I0717 19:19:55.687034 1084713 command_runner.go:130] > Version:          1.24.1
	I0717 19:19:55.687041 1084713 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0717 19:19:55.687045 1084713 command_runner.go:130] > GitTreeState:     dirty
	I0717 19:19:55.687051 1084713 command_runner.go:130] > BuildDate:        2023-07-15T02:24:22Z
	I0717 19:19:55.687056 1084713 command_runner.go:130] > GoVersion:        go1.19.9
	I0717 19:19:55.687060 1084713 command_runner.go:130] > Compiler:         gc
	I0717 19:19:55.687064 1084713 command_runner.go:130] > Platform:         linux/amd64
	I0717 19:19:55.687070 1084713 command_runner.go:130] > Linkmode:         dynamic
	I0717 19:19:55.687077 1084713 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0717 19:19:55.687081 1084713 command_runner.go:130] > SeccompEnabled:   true
	I0717 19:19:55.687086 1084713 command_runner.go:130] > AppArmorEnabled:  false
	I0717 19:19:55.690887 1084713 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0717 19:19:55.692981 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetIP
	I0717 19:19:55.696004 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:55.696770 1084713 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:19:47 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:19:55.696819 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:19:55.697239 1084713 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 19:19:55.701924 1084713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:19:55.714489 1084713 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 19:19:55.714606 1084713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:19:55.747731 1084713 command_runner.go:130] > {
	I0717 19:19:55.747758 1084713 command_runner.go:130] >   "images": [
	I0717 19:19:55.747762 1084713 command_runner.go:130] >     {
	I0717 19:19:55.747770 1084713 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0717 19:19:55.747775 1084713 command_runner.go:130] >       "repoTags": [
	I0717 19:19:55.747781 1084713 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0717 19:19:55.747785 1084713 command_runner.go:130] >       ],
	I0717 19:19:55.747789 1084713 command_runner.go:130] >       "repoDigests": [
	I0717 19:19:55.747799 1084713 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0717 19:19:55.747806 1084713 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0717 19:19:55.747810 1084713 command_runner.go:130] >       ],
	I0717 19:19:55.747814 1084713 command_runner.go:130] >       "size": "750414",
	I0717 19:19:55.747819 1084713 command_runner.go:130] >       "uid": {
	I0717 19:19:55.747827 1084713 command_runner.go:130] >         "value": "65535"
	I0717 19:19:55.747831 1084713 command_runner.go:130] >       },
	I0717 19:19:55.747835 1084713 command_runner.go:130] >       "username": "",
	I0717 19:19:55.747840 1084713 command_runner.go:130] >       "spec": null
	I0717 19:19:55.747843 1084713 command_runner.go:130] >     }
	I0717 19:19:55.747847 1084713 command_runner.go:130] >   ]
	I0717 19:19:55.747851 1084713 command_runner.go:130] > }
	I0717 19:19:55.748002 1084713 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0717 19:19:55.748070 1084713 ssh_runner.go:195] Run: which lz4
	I0717 19:19:55.752327 1084713 command_runner.go:130] > /usr/bin/lz4
	I0717 19:19:55.752457 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0717 19:19:55.752565 1084713 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 19:19:55.757157 1084713 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:19:55.757207 1084713 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:19:55.757245 1084713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0717 19:19:57.563941 1084713 crio.go:444] Took 1.811398 seconds to copy over tarball
	I0717 19:19:57.564037 1084713 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 19:20:00.456375 1084713 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.892299536s)
	I0717 19:20:00.456419 1084713 crio.go:451] Took 2.892437 seconds to extract the tarball
	I0717 19:20:00.456431 1084713 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 19:20:00.499918 1084713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:20:00.543120 1084713 command_runner.go:130] > {
	I0717 19:20:00.543148 1084713 command_runner.go:130] >   "images": [
	I0717 19:20:00.543155 1084713 command_runner.go:130] >     {
	I0717 19:20:00.543174 1084713 command_runner.go:130] >       "id": "b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da",
	I0717 19:20:00.543182 1084713 command_runner.go:130] >       "repoTags": [
	I0717 19:20:00.543192 1084713 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0717 19:20:00.543198 1084713 command_runner.go:130] >       ],
	I0717 19:20:00.543204 1084713 command_runner.go:130] >       "repoDigests": [
	I0717 19:20:00.543212 1084713 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974",
	I0717 19:20:00.543223 1084713 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"
	I0717 19:20:00.543226 1084713 command_runner.go:130] >       ],
	I0717 19:20:00.543230 1084713 command_runner.go:130] >       "size": "65249302",
	I0717 19:20:00.543235 1084713 command_runner.go:130] >       "uid": null,
	I0717 19:20:00.543239 1084713 command_runner.go:130] >       "username": "",
	I0717 19:20:00.543250 1084713 command_runner.go:130] >       "spec": null
	I0717 19:20:00.543260 1084713 command_runner.go:130] >     },
	I0717 19:20:00.543266 1084713 command_runner.go:130] >     {
	I0717 19:20:00.543277 1084713 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0717 19:20:00.543292 1084713 command_runner.go:130] >       "repoTags": [
	I0717 19:20:00.543301 1084713 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0717 19:20:00.543310 1084713 command_runner.go:130] >       ],
	I0717 19:20:00.543317 1084713 command_runner.go:130] >       "repoDigests": [
	I0717 19:20:00.543325 1084713 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0717 19:20:00.543336 1084713 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0717 19:20:00.543345 1084713 command_runner.go:130] >       ],
	I0717 19:20:00.543352 1084713 command_runner.go:130] >       "size": "31470524",
	I0717 19:20:00.543360 1084713 command_runner.go:130] >       "uid": null,
	I0717 19:20:00.543373 1084713 command_runner.go:130] >       "username": "",
	I0717 19:20:00.543384 1084713 command_runner.go:130] >       "spec": null
	I0717 19:20:00.543390 1084713 command_runner.go:130] >     },
	I0717 19:20:00.543402 1084713 command_runner.go:130] >     {
	I0717 19:20:00.543415 1084713 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0717 19:20:00.543422 1084713 command_runner.go:130] >       "repoTags": [
	I0717 19:20:00.543431 1084713 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0717 19:20:00.543441 1084713 command_runner.go:130] >       ],
	I0717 19:20:00.543449 1084713 command_runner.go:130] >       "repoDigests": [
	I0717 19:20:00.543465 1084713 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0717 19:20:00.543480 1084713 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0717 19:20:00.543489 1084713 command_runner.go:130] >       ],
	I0717 19:20:00.543500 1084713 command_runner.go:130] >       "size": "53621675",
	I0717 19:20:00.543508 1084713 command_runner.go:130] >       "uid": null,
	I0717 19:20:00.543513 1084713 command_runner.go:130] >       "username": "",
	I0717 19:20:00.543523 1084713 command_runner.go:130] >       "spec": null
	I0717 19:20:00.543533 1084713 command_runner.go:130] >     },
	I0717 19:20:00.543539 1084713 command_runner.go:130] >     {
	I0717 19:20:00.543553 1084713 command_runner.go:130] >       "id": "86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681",
	I0717 19:20:00.543562 1084713 command_runner.go:130] >       "repoTags": [
	I0717 19:20:00.543573 1084713 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.7-0"
	I0717 19:20:00.543582 1084713 command_runner.go:130] >       ],
	I0717 19:20:00.543592 1084713 command_runner.go:130] >       "repoDigests": [
	I0717 19:20:00.543604 1084713 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83",
	I0717 19:20:00.543620 1084713 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9"
	I0717 19:20:00.543630 1084713 command_runner.go:130] >       ],
	I0717 19:20:00.543637 1084713 command_runner.go:130] >       "size": "297083935",
	I0717 19:20:00.543646 1084713 command_runner.go:130] >       "uid": {
	I0717 19:20:00.543655 1084713 command_runner.go:130] >         "value": "0"
	I0717 19:20:00.543671 1084713 command_runner.go:130] >       },
	I0717 19:20:00.543682 1084713 command_runner.go:130] >       "username": "",
	I0717 19:20:00.543691 1084713 command_runner.go:130] >       "spec": null
	I0717 19:20:00.543697 1084713 command_runner.go:130] >     },
	I0717 19:20:00.543706 1084713 command_runner.go:130] >     {
	I0717 19:20:00.543721 1084713 command_runner.go:130] >       "id": "08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a",
	I0717 19:20:00.543731 1084713 command_runner.go:130] >       "repoTags": [
	I0717 19:20:00.543743 1084713 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.27.3"
	I0717 19:20:00.543752 1084713 command_runner.go:130] >       ],
	I0717 19:20:00.543759 1084713 command_runner.go:130] >       "repoDigests": [
	I0717 19:20:00.543770 1084713 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb",
	I0717 19:20:00.543786 1084713 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0"
	I0717 19:20:00.543793 1084713 command_runner.go:130] >       ],
	I0717 19:20:00.543800 1084713 command_runner.go:130] >       "size": "122065872",
	I0717 19:20:00.543810 1084713 command_runner.go:130] >       "uid": {
	I0717 19:20:00.543817 1084713 command_runner.go:130] >         "value": "0"
	I0717 19:20:00.543825 1084713 command_runner.go:130] >       },
	I0717 19:20:00.543832 1084713 command_runner.go:130] >       "username": "",
	I0717 19:20:00.543842 1084713 command_runner.go:130] >       "spec": null
	I0717 19:20:00.543852 1084713 command_runner.go:130] >     },
	I0717 19:20:00.543859 1084713 command_runner.go:130] >     {
	I0717 19:20:00.543866 1084713 command_runner.go:130] >       "id": "7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f",
	I0717 19:20:00.543875 1084713 command_runner.go:130] >       "repoTags": [
	I0717 19:20:00.543888 1084713 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.27.3"
	I0717 19:20:00.543897 1084713 command_runner.go:130] >       ],
	I0717 19:20:00.543908 1084713 command_runner.go:130] >       "repoDigests": [
	I0717 19:20:00.543923 1084713 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e",
	I0717 19:20:00.543940 1084713 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:d3bdc20876edfaa4894cf8464dc98592385a43cbc033b37846dccc2460c7bc06"
	I0717 19:20:00.543947 1084713 command_runner.go:130] >       ],
	I0717 19:20:00.543951 1084713 command_runner.go:130] >       "size": "113919286",
	I0717 19:20:00.543958 1084713 command_runner.go:130] >       "uid": {
	I0717 19:20:00.543968 1084713 command_runner.go:130] >         "value": "0"
	I0717 19:20:00.543978 1084713 command_runner.go:130] >       },
	I0717 19:20:00.543985 1084713 command_runner.go:130] >       "username": "",
	I0717 19:20:00.543999 1084713 command_runner.go:130] >       "spec": null
	I0717 19:20:00.544007 1084713 command_runner.go:130] >     },
	I0717 19:20:00.544014 1084713 command_runner.go:130] >     {
	I0717 19:20:00.544030 1084713 command_runner.go:130] >       "id": "5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c",
	I0717 19:20:00.544037 1084713 command_runner.go:130] >       "repoTags": [
	I0717 19:20:00.544045 1084713 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.27.3"
	I0717 19:20:00.544054 1084713 command_runner.go:130] >       ],
	I0717 19:20:00.544065 1084713 command_runner.go:130] >       "repoDigests": [
	I0717 19:20:00.544080 1084713 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f",
	I0717 19:20:00.544096 1084713 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699"
	I0717 19:20:00.544105 1084713 command_runner.go:130] >       ],
	I0717 19:20:00.544112 1084713 command_runner.go:130] >       "size": "72713623",
	I0717 19:20:00.544121 1084713 command_runner.go:130] >       "uid": null,
	I0717 19:20:00.544125 1084713 command_runner.go:130] >       "username": "",
	I0717 19:20:00.544134 1084713 command_runner.go:130] >       "spec": null
	I0717 19:20:00.544143 1084713 command_runner.go:130] >     },
	I0717 19:20:00.544153 1084713 command_runner.go:130] >     {
	I0717 19:20:00.544167 1084713 command_runner.go:130] >       "id": "41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a",
	I0717 19:20:00.544177 1084713 command_runner.go:130] >       "repoTags": [
	I0717 19:20:00.544188 1084713 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.27.3"
	I0717 19:20:00.544197 1084713 command_runner.go:130] >       ],
	I0717 19:20:00.544211 1084713 command_runner.go:130] >       "repoDigests": [
	I0717 19:20:00.544226 1084713 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082",
	I0717 19:20:00.544266 1084713 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8"
	I0717 19:20:00.544278 1084713 command_runner.go:130] >       ],
	I0717 19:20:00.544285 1084713 command_runner.go:130] >       "size": "59811126",
	I0717 19:20:00.544291 1084713 command_runner.go:130] >       "uid": {
	I0717 19:20:00.544297 1084713 command_runner.go:130] >         "value": "0"
	I0717 19:20:00.544301 1084713 command_runner.go:130] >       },
	I0717 19:20:00.544309 1084713 command_runner.go:130] >       "username": "",
	I0717 19:20:00.544318 1084713 command_runner.go:130] >       "spec": null
	I0717 19:20:00.544328 1084713 command_runner.go:130] >     },
	I0717 19:20:00.544337 1084713 command_runner.go:130] >     {
	I0717 19:20:00.544352 1084713 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0717 19:20:00.544362 1084713 command_runner.go:130] >       "repoTags": [
	I0717 19:20:00.544374 1084713 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0717 19:20:00.544382 1084713 command_runner.go:130] >       ],
	I0717 19:20:00.544387 1084713 command_runner.go:130] >       "repoDigests": [
	I0717 19:20:00.544396 1084713 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0717 19:20:00.544414 1084713 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0717 19:20:00.544424 1084713 command_runner.go:130] >       ],
	I0717 19:20:00.544434 1084713 command_runner.go:130] >       "size": "750414",
	I0717 19:20:00.544444 1084713 command_runner.go:130] >       "uid": {
	I0717 19:20:00.544452 1084713 command_runner.go:130] >         "value": "65535"
	I0717 19:20:00.544461 1084713 command_runner.go:130] >       },
	I0717 19:20:00.544471 1084713 command_runner.go:130] >       "username": "",
	I0717 19:20:00.544475 1084713 command_runner.go:130] >       "spec": null
	I0717 19:20:00.544479 1084713 command_runner.go:130] >     }
	I0717 19:20:00.544485 1084713 command_runner.go:130] >   ]
	I0717 19:20:00.544494 1084713 command_runner.go:130] > }
	I0717 19:20:00.544658 1084713 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 19:20:00.544672 1084713 cache_images.go:84] Images are preloaded, skipping loading
	I0717 19:20:00.544759 1084713 ssh_runner.go:195] Run: crio config
	I0717 19:20:00.619194 1084713 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0717 19:20:00.619224 1084713 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0717 19:20:00.619231 1084713 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0717 19:20:00.619235 1084713 command_runner.go:130] > #
	I0717 19:20:00.619264 1084713 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0717 19:20:00.619275 1084713 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0717 19:20:00.619289 1084713 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0717 19:20:00.619301 1084713 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0717 19:20:00.619311 1084713 command_runner.go:130] > # reload'.
	I0717 19:20:00.619320 1084713 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0717 19:20:00.619329 1084713 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0717 19:20:00.619335 1084713 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0717 19:20:00.619344 1084713 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0717 19:20:00.619347 1084713 command_runner.go:130] > [crio]
	I0717 19:20:00.619359 1084713 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0717 19:20:00.619370 1084713 command_runner.go:130] > # containers images, in this directory.
	I0717 19:20:00.619382 1084713 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0717 19:20:00.619401 1084713 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0717 19:20:00.619413 1084713 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0717 19:20:00.619425 1084713 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0717 19:20:00.619433 1084713 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0717 19:20:00.619439 1084713 command_runner.go:130] > storage_driver = "overlay"
	I0717 19:20:00.619450 1084713 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0717 19:20:00.619465 1084713 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0717 19:20:00.619471 1084713 command_runner.go:130] > storage_option = [
	I0717 19:20:00.619477 1084713 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0717 19:20:00.619482 1084713 command_runner.go:130] > ]
	I0717 19:20:00.619493 1084713 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0717 19:20:00.619503 1084713 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0717 19:20:00.619515 1084713 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0717 19:20:00.619524 1084713 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0717 19:20:00.619536 1084713 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0717 19:20:00.619545 1084713 command_runner.go:130] > # always happen on a node reboot
	I0717 19:20:00.619554 1084713 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0717 19:20:00.619567 1084713 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0717 19:20:00.619580 1084713 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0717 19:20:00.619604 1084713 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0717 19:20:00.619616 1084713 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0717 19:20:00.619630 1084713 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0717 19:20:00.619647 1084713 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0717 19:20:00.619663 1084713 command_runner.go:130] > # internal_wipe = true
	I0717 19:20:00.619685 1084713 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0717 19:20:00.619700 1084713 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0717 19:20:00.619713 1084713 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0717 19:20:00.619726 1084713 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0717 19:20:00.619737 1084713 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0717 19:20:00.619746 1084713 command_runner.go:130] > [crio.api]
	I0717 19:20:00.619756 1084713 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0717 19:20:00.619768 1084713 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0717 19:20:00.619781 1084713 command_runner.go:130] > # IP address on which the stream server will listen.
	I0717 19:20:00.619792 1084713 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0717 19:20:00.619807 1084713 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0717 19:20:00.619820 1084713 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0717 19:20:00.619830 1084713 command_runner.go:130] > # stream_port = "0"
	I0717 19:20:00.619843 1084713 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0717 19:20:00.619853 1084713 command_runner.go:130] > # stream_enable_tls = false
	I0717 19:20:00.619867 1084713 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0717 19:20:00.619878 1084713 command_runner.go:130] > # stream_idle_timeout = ""
	I0717 19:20:00.619896 1084713 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0717 19:20:00.619910 1084713 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0717 19:20:00.619919 1084713 command_runner.go:130] > # minutes.
	I0717 19:20:00.619926 1084713 command_runner.go:130] > # stream_tls_cert = ""
	I0717 19:20:00.619941 1084713 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0717 19:20:00.619955 1084713 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0717 19:20:00.619966 1084713 command_runner.go:130] > # stream_tls_key = ""
	I0717 19:20:00.619977 1084713 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0717 19:20:00.619992 1084713 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0717 19:20:00.620005 1084713 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0717 19:20:00.620115 1084713 command_runner.go:130] > # stream_tls_ca = ""
	I0717 19:20:00.620132 1084713 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0717 19:20:00.620140 1084713 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0717 19:20:00.620156 1084713 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0717 19:20:00.620166 1084713 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
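The [crio.api] settings above cap gRPC messages at 16 MiB in both directions on the CRI socket. A hedged sketch of a client dialing that socket with matching call options, assuming grpc-go and the k8s.io/cri-api bindings are available (this mirrors how crictl-style tools talk to CRI-O; it is not minikube's own code):

	// sketch: dial the CRI-O socket from [crio.api] with call options that
	// mirror the 16 MiB grpc_max_*_msg_size values above.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()),
			grpc.WithDefaultCallOptions(
				grpc.MaxCallRecvMsgSize(16777216),
				grpc.MaxCallSendMsgSize(16777216),
			))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		v, err := runtimeapi.NewRuntimeServiceClient(conn).Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(v.RuntimeName, v.RuntimeVersion)
	}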
	I0717 19:20:00.620199 1084713 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0717 19:20:00.620213 1084713 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0717 19:20:00.620222 1084713 command_runner.go:130] > [crio.runtime]
	I0717 19:20:00.620236 1084713 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0717 19:20:00.620247 1084713 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0717 19:20:00.620253 1084713 command_runner.go:130] > # "nofile=1024:2048"
	I0717 19:20:00.620259 1084713 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0717 19:20:00.620266 1084713 command_runner.go:130] > # default_ulimits = [
	I0717 19:20:00.620270 1084713 command_runner.go:130] > # ]
	I0717 19:20:00.620276 1084713 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0717 19:20:00.620282 1084713 command_runner.go:130] > # no_pivot = false
	I0717 19:20:00.620290 1084713 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0717 19:20:00.620300 1084713 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0717 19:20:00.620311 1084713 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0717 19:20:00.620321 1084713 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0717 19:20:00.620329 1084713 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0717 19:20:00.620339 1084713 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 19:20:00.620350 1084713 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0717 19:20:00.620358 1084713 command_runner.go:130] > # Cgroup setting for conmon
	I0717 19:20:00.620371 1084713 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0717 19:20:00.620381 1084713 command_runner.go:130] > conmon_cgroup = "pod"
	I0717 19:20:00.620399 1084713 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0717 19:20:00.620411 1084713 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0717 19:20:00.620421 1084713 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 19:20:00.620431 1084713 command_runner.go:130] > conmon_env = [
	I0717 19:20:00.620473 1084713 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0717 19:20:00.620492 1084713 command_runner.go:130] > ]
	I0717 19:20:00.620502 1084713 command_runner.go:130] > # Additional environment variables to set for all the
	I0717 19:20:00.620514 1084713 command_runner.go:130] > # containers. These are overridden if set in the
	I0717 19:20:00.620525 1084713 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0717 19:20:00.620535 1084713 command_runner.go:130] > # default_env = [
	I0717 19:20:00.620543 1084713 command_runner.go:130] > # ]
	I0717 19:20:00.620556 1084713 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0717 19:20:00.620564 1084713 command_runner.go:130] > # selinux = false
	I0717 19:20:00.620576 1084713 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0717 19:20:00.620590 1084713 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0717 19:20:00.620604 1084713 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0717 19:20:00.620614 1084713 command_runner.go:130] > # seccomp_profile = ""
	I0717 19:20:00.620626 1084713 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0717 19:20:00.620646 1084713 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0717 19:20:00.620660 1084713 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0717 19:20:00.620670 1084713 command_runner.go:130] > # which might increase security.
	I0717 19:20:00.620680 1084713 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0717 19:20:00.620698 1084713 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0717 19:20:00.620712 1084713 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0717 19:20:00.620725 1084713 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0717 19:20:00.620740 1084713 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0717 19:20:00.620752 1084713 command_runner.go:130] > # This option supports live configuration reload.
	I0717 19:20:00.620762 1084713 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0717 19:20:00.620772 1084713 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0717 19:20:00.620782 1084713 command_runner.go:130] > # the cgroup blockio controller.
	I0717 19:20:00.620792 1084713 command_runner.go:130] > # blockio_config_file = ""
	I0717 19:20:00.620806 1084713 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0717 19:20:00.620816 1084713 command_runner.go:130] > # irqbalance daemon.
	I0717 19:20:00.620828 1084713 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0717 19:20:00.620838 1084713 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0717 19:20:00.620850 1084713 command_runner.go:130] > # This option supports live configuration reload.
	I0717 19:20:00.620864 1084713 command_runner.go:130] > # rdt_config_file = ""
	I0717 19:20:00.620876 1084713 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0717 19:20:00.620886 1084713 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0717 19:20:00.620898 1084713 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0717 19:20:00.620908 1084713 command_runner.go:130] > # separate_pull_cgroup = ""
	I0717 19:20:00.620922 1084713 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0717 19:20:00.620936 1084713 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0717 19:20:00.620945 1084713 command_runner.go:130] > # will be added.
	I0717 19:20:00.620953 1084713 command_runner.go:130] > # default_capabilities = [
	I0717 19:20:00.620962 1084713 command_runner.go:130] > # 	"CHOWN",
	I0717 19:20:00.620972 1084713 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0717 19:20:00.620981 1084713 command_runner.go:130] > # 	"FSETID",
	I0717 19:20:00.620988 1084713 command_runner.go:130] > # 	"FOWNER",
	I0717 19:20:00.620997 1084713 command_runner.go:130] > # 	"SETGID",
	I0717 19:20:00.621004 1084713 command_runner.go:130] > # 	"SETUID",
	I0717 19:20:00.621017 1084713 command_runner.go:130] > # 	"SETPCAP",
	I0717 19:20:00.621024 1084713 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0717 19:20:00.621033 1084713 command_runner.go:130] > # 	"KILL",
	I0717 19:20:00.621043 1084713 command_runner.go:130] > # ]
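The commented default_capabilities list above is the capability set CRI-O grants when nothing else is requested; a pod can still narrow it per container. A small illustrative sketch using k8s.io/api/core/v1 types (an assumption about available libraries, not taken from the test):

	// sketch: narrow the default capability list above for one container by
	// dropping everything and adding back only NET_BIND_SERVICE.
	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	func main() {
		sc := &corev1.SecurityContext{
			Capabilities: &corev1.Capabilities{
				Drop: []corev1.Capability{"ALL"},
				Add:  []corev1.Capability{"NET_BIND_SERVICE"},
			},
		}
		fmt.Printf("drop=%v add=%v\n", sc.Capabilities.Drop, sc.Capabilities.Add)
	}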
	I0717 19:20:00.621058 1084713 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0717 19:20:00.621071 1084713 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 19:20:00.621081 1084713 command_runner.go:130] > # default_sysctls = [
	I0717 19:20:00.621087 1084713 command_runner.go:130] > # ]
	I0717 19:20:00.621098 1084713 command_runner.go:130] > # List of devices on the host that a
	I0717 19:20:00.621147 1084713 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0717 19:20:00.621158 1084713 command_runner.go:130] > # allowed_devices = [
	I0717 19:20:00.621165 1084713 command_runner.go:130] > # 	"/dev/fuse",
	I0717 19:20:00.621170 1084713 command_runner.go:130] > # ]
	I0717 19:20:00.621183 1084713 command_runner.go:130] > # List of additional devices, specified as
	I0717 19:20:00.621193 1084713 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0717 19:20:00.621203 1084713 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0717 19:20:00.621248 1084713 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 19:20:00.621260 1084713 command_runner.go:130] > # additional_devices = [
	I0717 19:20:00.621265 1084713 command_runner.go:130] > # ]
	I0717 19:20:00.621273 1084713 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0717 19:20:00.621284 1084713 command_runner.go:130] > # cdi_spec_dirs = [
	I0717 19:20:00.621293 1084713 command_runner.go:130] > # 	"/etc/cdi",
	I0717 19:20:00.621303 1084713 command_runner.go:130] > # 	"/var/run/cdi",
	I0717 19:20:00.621311 1084713 command_runner.go:130] > # ]
	I0717 19:20:00.621325 1084713 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0717 19:20:00.621337 1084713 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0717 19:20:00.621347 1084713 command_runner.go:130] > # Defaults to false.
	I0717 19:20:00.621354 1084713 command_runner.go:130] > # device_ownership_from_security_context = false
	I0717 19:20:00.621368 1084713 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0717 19:20:00.621381 1084713 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0717 19:20:00.621391 1084713 command_runner.go:130] > # hooks_dir = [
	I0717 19:20:00.621398 1084713 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0717 19:20:00.621408 1084713 command_runner.go:130] > # ]
	I0717 19:20:00.621418 1084713 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0717 19:20:00.621431 1084713 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0717 19:20:00.621443 1084713 command_runner.go:130] > # its default mounts from the following two files:
	I0717 19:20:00.621449 1084713 command_runner.go:130] > #
	I0717 19:20:00.621460 1084713 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0717 19:20:00.621474 1084713 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0717 19:20:00.621492 1084713 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0717 19:20:00.621501 1084713 command_runner.go:130] > #
	I0717 19:20:00.621511 1084713 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0717 19:20:00.621526 1084713 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0717 19:20:00.621540 1084713 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0717 19:20:00.621551 1084713 command_runner.go:130] > #      only add mounts it finds in this file.
	I0717 19:20:00.621568 1084713 command_runner.go:130] > #
	I0717 19:20:00.621577 1084713 command_runner.go:130] > # default_mounts_file = ""
	I0717 19:20:00.621591 1084713 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0717 19:20:00.621605 1084713 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0717 19:20:00.621615 1084713 command_runner.go:130] > pids_limit = 1024
	I0717 19:20:00.621627 1084713 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0717 19:20:00.621637 1084713 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0717 19:20:00.621645 1084713 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0717 19:20:00.621653 1084713 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0717 19:20:00.621660 1084713 command_runner.go:130] > # log_size_max = -1
	I0717 19:20:00.621666 1084713 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0717 19:20:00.621672 1084713 command_runner.go:130] > # log_to_journald = false
	I0717 19:20:00.621681 1084713 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0717 19:20:00.621691 1084713 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0717 19:20:00.621701 1084713 command_runner.go:130] > # Path to directory for container attach sockets.
	I0717 19:20:00.621709 1084713 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0717 19:20:00.621721 1084713 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0717 19:20:00.621732 1084713 command_runner.go:130] > # bind_mount_prefix = ""
	I0717 19:20:00.621744 1084713 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0717 19:20:00.621754 1084713 command_runner.go:130] > # read_only = false
	I0717 19:20:00.621764 1084713 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0717 19:20:00.621778 1084713 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0717 19:20:00.621788 1084713 command_runner.go:130] > # live configuration reload.
	I0717 19:20:00.621797 1084713 command_runner.go:130] > # log_level = "info"
	I0717 19:20:00.621807 1084713 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0717 19:20:00.621818 1084713 command_runner.go:130] > # This option supports live configuration reload.
	I0717 19:20:00.621827 1084713 command_runner.go:130] > # log_filter = ""
	I0717 19:20:00.621834 1084713 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0717 19:20:00.621842 1084713 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0717 19:20:00.621846 1084713 command_runner.go:130] > # separated by comma.
	I0717 19:20:00.621856 1084713 command_runner.go:130] > # uid_mappings = ""
	I0717 19:20:00.621869 1084713 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0717 19:20:00.621884 1084713 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0717 19:20:00.621894 1084713 command_runner.go:130] > # separated by comma.
	I0717 19:20:00.621904 1084713 command_runner.go:130] > # gid_mappings = ""
	I0717 19:20:00.621915 1084713 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0717 19:20:00.621928 1084713 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 19:20:00.621939 1084713 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 19:20:00.621945 1084713 command_runner.go:130] > # minimum_mappable_uid = -1
	I0717 19:20:00.621953 1084713 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0717 19:20:00.621959 1084713 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 19:20:00.621972 1084713 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 19:20:00.621982 1084713 command_runner.go:130] > # minimum_mappable_gid = -1
	I0717 19:20:00.621996 1084713 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0717 19:20:00.622013 1084713 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0717 19:20:00.622053 1084713 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0717 19:20:00.622064 1084713 command_runner.go:130] > # ctr_stop_timeout = 30
	I0717 19:20:00.622074 1084713 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0717 19:20:00.622090 1084713 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0717 19:20:00.622102 1084713 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0717 19:20:00.622111 1084713 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0717 19:20:00.622118 1084713 command_runner.go:130] > drop_infra_ctr = false
	I0717 19:20:00.622125 1084713 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0717 19:20:00.622133 1084713 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0717 19:20:00.622144 1084713 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0717 19:20:00.622151 1084713 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0717 19:20:00.622156 1084713 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0717 19:20:00.622167 1084713 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0717 19:20:00.622173 1084713 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0717 19:20:00.622182 1084713 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0717 19:20:00.622188 1084713 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0717 19:20:00.622194 1084713 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0717 19:20:00.622205 1084713 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0717 19:20:00.622218 1084713 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0717 19:20:00.622229 1084713 command_runner.go:130] > # default_runtime = "runc"
	I0717 19:20:00.622242 1084713 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0717 19:20:00.622263 1084713 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0717 19:20:00.622282 1084713 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0717 19:20:00.622294 1084713 command_runner.go:130] > # creation as a file is not desired either.
	I0717 19:20:00.622311 1084713 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0717 19:20:00.622326 1084713 command_runner.go:130] > # the hostname is being managed dynamically.
	I0717 19:20:00.622337 1084713 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0717 19:20:00.622343 1084713 command_runner.go:130] > # ]
	I0717 19:20:00.622350 1084713 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0717 19:20:00.622362 1084713 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0717 19:20:00.622375 1084713 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0717 19:20:00.622389 1084713 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0717 19:20:00.622397 1084713 command_runner.go:130] > #
	I0717 19:20:00.622409 1084713 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0717 19:20:00.622421 1084713 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0717 19:20:00.622429 1084713 command_runner.go:130] > #  runtime_type = "oci"
	I0717 19:20:00.622440 1084713 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0717 19:20:00.622448 1084713 command_runner.go:130] > #  privileged_without_host_devices = false
	I0717 19:20:00.622458 1084713 command_runner.go:130] > #  allowed_annotations = []
	I0717 19:20:00.622469 1084713 command_runner.go:130] > # Where:
	I0717 19:20:00.622477 1084713 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0717 19:20:00.622483 1084713 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0717 19:20:00.622491 1084713 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0717 19:20:00.622499 1084713 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0717 19:20:00.622504 1084713 command_runner.go:130] > #   in $PATH.
	I0717 19:20:00.622512 1084713 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0717 19:20:00.622524 1084713 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0717 19:20:00.622536 1084713 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0717 19:20:00.622546 1084713 command_runner.go:130] > #   state.
	I0717 19:20:00.622560 1084713 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0717 19:20:00.622573 1084713 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0717 19:20:00.622583 1084713 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0717 19:20:00.622596 1084713 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0717 19:20:00.622609 1084713 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0717 19:20:00.622624 1084713 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0717 19:20:00.622634 1084713 command_runner.go:130] > #   The currently recognized values are:
	I0717 19:20:00.622648 1084713 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0717 19:20:00.622666 1084713 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0717 19:20:00.622679 1084713 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0717 19:20:00.622692 1084713 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0717 19:20:00.622712 1084713 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0717 19:20:00.622728 1084713 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0717 19:20:00.622743 1084713 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0717 19:20:00.622754 1084713 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0717 19:20:00.622766 1084713 command_runner.go:130] > #   should be moved to the container's cgroup
	I0717 19:20:00.622776 1084713 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0717 19:20:00.622787 1084713 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0717 19:20:00.622796 1084713 command_runner.go:130] > runtime_type = "oci"
	I0717 19:20:00.622828 1084713 command_runner.go:130] > runtime_root = "/run/runc"
	I0717 19:20:00.622839 1084713 command_runner.go:130] > runtime_config_path = ""
	I0717 19:20:00.622849 1084713 command_runner.go:130] > monitor_path = ""
	I0717 19:20:00.622856 1084713 command_runner.go:130] > monitor_cgroup = ""
	I0717 19:20:00.622866 1084713 command_runner.go:130] > monitor_exec_cgroup = ""
	I0717 19:20:00.622879 1084713 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0717 19:20:00.622888 1084713 command_runner.go:130] > # running containers
	I0717 19:20:00.622903 1084713 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0717 19:20:00.622920 1084713 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0717 19:20:00.622990 1084713 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0717 19:20:00.623003 1084713 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0717 19:20:00.623019 1084713 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0717 19:20:00.623028 1084713 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0717 19:20:00.623039 1084713 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0717 19:20:00.623050 1084713 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0717 19:20:00.623061 1084713 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0717 19:20:00.623072 1084713 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
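Each [crio.runtime.runtimes.*] table name above doubles as a runtime handler that Kubernetes can target through a RuntimeClass. As an illustrative sketch, if one of the commented kata tables were enabled, a RuntimeClass object like the following (using k8s.io/api/node/v1, assumed available) would expose it to pods via runtimeClassName:

	// sketch: how a runtime handler from the runtimes table above is surfaced
	// to Kubernetes. The kata-qemu name is hypothetical here, since that table
	// is commented out in this run.
	package main

	import (
		"fmt"

		nodev1 "k8s.io/api/node/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func main() {
		rc := nodev1.RuntimeClass{
			ObjectMeta: metav1.ObjectMeta{Name: "kata-qemu"},
			Handler:    "kata-qemu", // must match the [crio.runtime.runtimes.*] table name
		}
		fmt.Printf("runtimeClass %s -> handler %s\n", rc.Name, rc.Handler)
	}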
	I0717 19:20:00.623082 1084713 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0717 19:20:00.623091 1084713 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0717 19:20:00.623100 1084713 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0717 19:20:00.623116 1084713 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0717 19:20:00.623133 1084713 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0717 19:20:00.623147 1084713 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0717 19:20:00.623165 1084713 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0717 19:20:00.623181 1084713 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0717 19:20:00.623197 1084713 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0717 19:20:00.623210 1084713 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0717 19:20:00.623220 1084713 command_runner.go:130] > # Example:
	I0717 19:20:00.623231 1084713 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0717 19:20:00.623245 1084713 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0717 19:20:00.623257 1084713 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0717 19:20:00.623268 1084713 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0717 19:20:00.623277 1084713 command_runner.go:130] > # cpuset = 0
	I0717 19:20:00.623285 1084713 command_runner.go:130] > # cpushares = "0-1"
	I0717 19:20:00.623289 1084713 command_runner.go:130] > # Where:
	I0717 19:20:00.623299 1084713 command_runner.go:130] > # The workload name is workload-type.
	I0717 19:20:00.623315 1084713 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0717 19:20:00.623328 1084713 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0717 19:20:00.623340 1084713 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0717 19:20:00.623356 1084713 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0717 19:20:00.623368 1084713 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0717 19:20:00.623375 1084713 command_runner.go:130] > # 
	I0717 19:20:00.623384 1084713 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0717 19:20:00.623396 1084713 command_runner.go:130] > #
	I0717 19:20:00.623410 1084713 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0717 19:20:00.623424 1084713 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0717 19:20:00.623438 1084713 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0717 19:20:00.623451 1084713 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0717 19:20:00.623463 1084713 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0717 19:20:00.623472 1084713 command_runner.go:130] > [crio.image]
	I0717 19:20:00.623481 1084713 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0717 19:20:00.623491 1084713 command_runner.go:130] > # default_transport = "docker://"
	I0717 19:20:00.623505 1084713 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0717 19:20:00.623520 1084713 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0717 19:20:00.623531 1084713 command_runner.go:130] > # global_auth_file = ""
	I0717 19:20:00.623542 1084713 command_runner.go:130] > # The image used to instantiate infra containers.
	I0717 19:20:00.623553 1084713 command_runner.go:130] > # This option supports live configuration reload.
	I0717 19:20:00.623564 1084713 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0717 19:20:00.623576 1084713 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0717 19:20:00.623588 1084713 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0717 19:20:00.623600 1084713 command_runner.go:130] > # This option supports live configuration reload.
	I0717 19:20:00.623617 1084713 command_runner.go:130] > # pause_image_auth_file = ""
	I0717 19:20:00.623630 1084713 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0717 19:20:00.623643 1084713 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0717 19:20:00.623656 1084713 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0717 19:20:00.623666 1084713 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0717 19:20:00.623678 1084713 command_runner.go:130] > # pause_command = "/pause"
	I0717 19:20:00.623692 1084713 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0717 19:20:00.623705 1084713 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0717 19:20:00.623724 1084713 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0717 19:20:00.623766 1084713 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0717 19:20:00.623775 1084713 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0717 19:20:00.623780 1084713 command_runner.go:130] > # signature_policy = ""
	I0717 19:20:00.623792 1084713 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0717 19:20:00.623806 1084713 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0717 19:20:00.623817 1084713 command_runner.go:130] > # changing them here.
	I0717 19:20:00.623827 1084713 command_runner.go:130] > # insecure_registries = [
	I0717 19:20:00.623832 1084713 command_runner.go:130] > # ]
	I0717 19:20:00.623843 1084713 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0717 19:20:00.623854 1084713 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0717 19:20:00.623861 1084713 command_runner.go:130] > # image_volumes = "mkdir"
	I0717 19:20:00.623870 1084713 command_runner.go:130] > # Temporary directory to use for storing big files
	I0717 19:20:00.623874 1084713 command_runner.go:130] > # big_files_temporary_dir = ""
	I0717 19:20:00.623881 1084713 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0717 19:20:00.623887 1084713 command_runner.go:130] > # CNI plugins.
	I0717 19:20:00.623894 1084713 command_runner.go:130] > [crio.network]
	I0717 19:20:00.623904 1084713 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0717 19:20:00.623913 1084713 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0717 19:20:00.623920 1084713 command_runner.go:130] > # cni_default_network = ""
	I0717 19:20:00.623929 1084713 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0717 19:20:00.623937 1084713 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0717 19:20:00.623946 1084713 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0717 19:20:00.623952 1084713 command_runner.go:130] > # plugin_dirs = [
	I0717 19:20:00.623958 1084713 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0717 19:20:00.623962 1084713 command_runner.go:130] > # ]
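Per the [crio.network] comments above, CRI-O selects the first CNI configuration it finds under network_dir. A sketch of dropping a minimal bridge/host-local conflist there; the JSON content is illustrative only, since this run relies on the CNI that minikube installs (kindnet, per the log further down):

	// sketch: write a minimal CNI conflist into the network_dir above so
	// CRI-O has a default network to pick up. Illustrative content only.
	package main

	import (
		"log"
		"os"
	)

	const conflist = `{
	  "cniVersion": "0.4.0",
	  "name": "example-net",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "cni0",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    }
	  ]
	}`

	func main() {
		if err := os.WriteFile("/etc/cni/net.d/10-example.conflist", []byte(conflist), 0o644); err != nil {
			log.Fatal(err)
		}
	}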
	I0717 19:20:00.623968 1084713 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0717 19:20:00.623973 1084713 command_runner.go:130] > [crio.metrics]
	I0717 19:20:00.623988 1084713 command_runner.go:130] > # Globally enable or disable metrics support.
	I0717 19:20:00.623995 1084713 command_runner.go:130] > enable_metrics = true
	I0717 19:20:00.624003 1084713 command_runner.go:130] > # Specify enabled metrics collectors.
	I0717 19:20:00.624016 1084713 command_runner.go:130] > # Per default all metrics are enabled.
	I0717 19:20:00.624030 1084713 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0717 19:20:00.624043 1084713 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0717 19:20:00.624052 1084713 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0717 19:20:00.624061 1084713 command_runner.go:130] > # metrics_collectors = [
	I0717 19:20:00.624070 1084713 command_runner.go:130] > # 	"operations",
	I0717 19:20:00.624085 1084713 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0717 19:20:00.624096 1084713 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0717 19:20:00.624107 1084713 command_runner.go:130] > # 	"operations_errors",
	I0717 19:20:00.624117 1084713 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0717 19:20:00.624127 1084713 command_runner.go:130] > # 	"image_pulls_by_name",
	I0717 19:20:00.624137 1084713 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0717 19:20:00.624146 1084713 command_runner.go:130] > # 	"image_pulls_failures",
	I0717 19:20:00.624151 1084713 command_runner.go:130] > # 	"image_pulls_successes",
	I0717 19:20:00.624155 1084713 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0717 19:20:00.624168 1084713 command_runner.go:130] > # 	"image_layer_reuse",
	I0717 19:20:00.624179 1084713 command_runner.go:130] > # 	"containers_oom_total",
	I0717 19:20:00.624187 1084713 command_runner.go:130] > # 	"containers_oom",
	I0717 19:20:00.624197 1084713 command_runner.go:130] > # 	"processes_defunct",
	I0717 19:20:00.624211 1084713 command_runner.go:130] > # 	"operations_total",
	I0717 19:20:00.624221 1084713 command_runner.go:130] > # 	"operations_latency_seconds",
	I0717 19:20:00.624231 1084713 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0717 19:20:00.624240 1084713 command_runner.go:130] > # 	"operations_errors_total",
	I0717 19:20:00.624249 1084713 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0717 19:20:00.624254 1084713 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0717 19:20:00.624264 1084713 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0717 19:20:00.624271 1084713 command_runner.go:130] > # 	"image_pulls_success_total",
	I0717 19:20:00.624283 1084713 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0717 19:20:00.624291 1084713 command_runner.go:130] > # 	"containers_oom_count_total",
	I0717 19:20:00.624300 1084713 command_runner.go:130] > # ]
	I0717 19:20:00.624311 1084713 command_runner.go:130] > # The port on which the metrics server will listen.
	I0717 19:20:00.624321 1084713 command_runner.go:130] > # metrics_port = 9090
	I0717 19:20:00.624330 1084713 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0717 19:20:00.624343 1084713 command_runner.go:130] > # metrics_socket = ""
	I0717 19:20:00.624353 1084713 command_runner.go:130] > # The certificate for the secure metrics server.
	I0717 19:20:00.624363 1084713 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0717 19:20:00.624377 1084713 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0717 19:20:00.624388 1084713 command_runner.go:130] > # certificate on any modification event.
	I0717 19:20:00.624395 1084713 command_runner.go:130] > # metrics_cert = ""
	I0717 19:20:00.624408 1084713 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0717 19:20:00.624418 1084713 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0717 19:20:00.624428 1084713 command_runner.go:130] > # metrics_key = ""
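With enable_metrics = true above and the commented default metrics_port of 9090, the node exposes a Prometheus endpoint. A quick scrape sketch, run on the node itself; the URL is inferred from those defaults rather than confirmed by this log:

	// sketch: fetch the CRI-O Prometheus metrics enabled above and report
	// how much text came back.
	package main

	import (
		"fmt"
		"io"
		"log"
		"net/http"
	)

	func main() {
		resp, err := http.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("status=%s bytes=%d\n", resp.Status, len(body))
	}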
	I0717 19:20:00.624440 1084713 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0717 19:20:00.624449 1084713 command_runner.go:130] > [crio.tracing]
	I0717 19:20:00.624458 1084713 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0717 19:20:00.624466 1084713 command_runner.go:130] > # enable_tracing = false
	I0717 19:20:00.624478 1084713 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0717 19:20:00.624490 1084713 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0717 19:20:00.624502 1084713 command_runner.go:130] > # Number of samples to collect per million spans.
	I0717 19:20:00.624513 1084713 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0717 19:20:00.624526 1084713 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0717 19:20:00.624541 1084713 command_runner.go:130] > [crio.stats]
	I0717 19:20:00.624553 1084713 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0717 19:20:00.624561 1084713 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0717 19:20:00.624568 1084713 command_runner.go:130] > # stats_collection_period = 0
	I0717 19:20:00.624628 1084713 command_runner.go:130] ! time="2023-07-17 19:20:00.561478568Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0717 19:20:00.624663 1084713 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0717 19:20:00.624867 1084713 cni.go:84] Creating CNI manager for ""
	I0717 19:20:00.624898 1084713 cni.go:137] 3 nodes found, recommending kindnet
	I0717 19:20:00.624969 1084713 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 19:20:00.625007 1084713 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.174 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-464644 NodeName:multinode-464644 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:20:00.625193 1084713 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-464644"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.174
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:20:00.625296 1084713 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-464644 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:multinode-464644 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
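The kubeadm config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that minikube copies to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A sketch that splits the stream and prints each document's apiVersion/kind as a sanity check, assuming gopkg.in/yaml.v3 is available:

	// sketch: decode each YAML document in the rendered kubeadm config and
	// print its apiVersion/kind. Path matches where minikube copies it below.
	package main

	import (
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				log.Fatal(err)
			}
			fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
		}
	}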
	I0717 19:20:00.625373 1084713 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 19:20:00.635895 1084713 command_runner.go:130] > kubeadm
	I0717 19:20:00.635932 1084713 command_runner.go:130] > kubectl
	I0717 19:20:00.635939 1084713 command_runner.go:130] > kubelet
	I0717 19:20:00.635990 1084713 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:20:00.636048 1084713 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:20:00.646447 1084713 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0717 19:20:00.663747 1084713 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:20:00.681728 1084713 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0717 19:20:00.700680 1084713 ssh_runner.go:195] Run: grep 192.168.39.174	control-plane.minikube.internal$ /etc/hosts
	I0717 19:20:00.704745 1084713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:20:00.717706 1084713 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644 for IP: 192.168.39.174
	I0717 19:20:00.717755 1084713 certs.go:190] acquiring lock for shared ca certs: {Name:mkdfbc203d7bd874aff2f1c4e5c3352251804e32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:20:00.718004 1084713 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key
	I0717 19:20:00.718083 1084713 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key
	I0717 19:20:00.718238 1084713 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/client.key
	I0717 19:20:00.718335 1084713 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/apiserver.key.4baccf75
	I0717 19:20:00.718397 1084713 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/proxy-client.key
	I0717 19:20:00.718414 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 19:20:00.718436 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 19:20:00.718458 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 19:20:00.718478 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 19:20:00.718503 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 19:20:00.718522 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 19:20:00.718539 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 19:20:00.718556 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 19:20:00.718653 1084713 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem (1338 bytes)
	W0717 19:20:00.718703 1084713 certs.go:433] ignoring /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954_empty.pem, impossibly tiny 0 bytes
	I0717 19:20:00.718724 1084713 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:20:00.718762 1084713 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem (1082 bytes)
	I0717 19:20:00.718802 1084713 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:20:00.718832 1084713 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem (1675 bytes)
	I0717 19:20:00.718898 1084713 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:20:00.718938 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem -> /usr/share/ca-certificates/1068954.pem
	I0717 19:20:00.718959 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> /usr/share/ca-certificates/10689542.pem
	I0717 19:20:00.718981 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:20:00.720100 1084713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 19:20:00.746350 1084713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 19:20:00.773393 1084713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:20:00.800456 1084713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 19:20:00.828495 1084713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:20:00.859953 1084713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 19:20:00.888505 1084713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:20:00.920338 1084713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 19:20:00.953578 1084713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem --> /usr/share/ca-certificates/1068954.pem (1338 bytes)
	I0717 19:20:00.985494 1084713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /usr/share/ca-certificates/10689542.pem (1708 bytes)
	I0717 19:20:01.014545 1084713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:20:01.043285 1084713 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:20:01.062873 1084713 ssh_runner.go:195] Run: openssl version
	I0717 19:20:01.070344 1084713 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0717 19:20:01.070462 1084713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1068954.pem && ln -fs /usr/share/ca-certificates/1068954.pem /etc/ssl/certs/1068954.pem"
	I0717 19:20:01.082541 1084713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1068954.pem
	I0717 19:20:01.088273 1084713 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 17 18:52 /usr/share/ca-certificates/1068954.pem
	I0717 19:20:01.088315 1084713 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:52 /usr/share/ca-certificates/1068954.pem
	I0717 19:20:01.088388 1084713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1068954.pem
	I0717 19:20:01.094782 1084713 command_runner.go:130] > 51391683
	I0717 19:20:01.094986 1084713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1068954.pem /etc/ssl/certs/51391683.0"
	I0717 19:20:01.106515 1084713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10689542.pem && ln -fs /usr/share/ca-certificates/10689542.pem /etc/ssl/certs/10689542.pem"
	I0717 19:20:01.119068 1084713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10689542.pem
	I0717 19:20:01.124716 1084713 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 17 18:52 /usr/share/ca-certificates/10689542.pem
	I0717 19:20:01.124784 1084713 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:52 /usr/share/ca-certificates/10689542.pem
	I0717 19:20:01.124840 1084713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10689542.pem
	I0717 19:20:01.131447 1084713 command_runner.go:130] > 3ec20f2e
	I0717 19:20:01.131538 1084713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10689542.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:20:01.142870 1084713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:20:01.154208 1084713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:20:01.160271 1084713 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 17 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:20:01.160321 1084713 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:20:01.160384 1084713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:20:01.167363 1084713 command_runner.go:130] > b5213941
	I0717 19:20:01.167507 1084713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
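
The `openssl x509 -hash -noout` / `ln -fs` pairs above install each CA into the node's system trust store: the hash output names the `/etc/ssl/certs/<hash>.0` symlink that OpenSSL-based clients use to locate the certificate. A minimal Go sketch of the same two steps (a hypothetical helper, not code from minikube):

    // linkCACert mirrors the commands in the log above: it computes the
    // OpenSSL subject hash of an already-installed PEM and links
    // /etc/ssl/certs/<hash>.0 to it. Hypothetical helper, not minikube code.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func linkCACert(pemPath string) error {
        // Equivalent of: openssl x509 -hash -noout -in <pemPath>
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        hash := strings.TrimSpace(string(out))

        // Equivalent of: test -L /etc/ssl/certs/<hash>.0 || ln -fs <pemPath> /etc/ssl/certs/<hash>.0
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        if _, err := os.Lstat(link); err == nil {
            return nil // symlink already present
        }
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
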
	I0717 19:20:01.179809 1084713 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 19:20:01.184984 1084713 command_runner.go:130] > ca.crt
	I0717 19:20:01.185008 1084713 command_runner.go:130] > ca.key
	I0717 19:20:01.185014 1084713 command_runner.go:130] > healthcheck-client.crt
	I0717 19:20:01.185018 1084713 command_runner.go:130] > healthcheck-client.key
	I0717 19:20:01.185043 1084713 command_runner.go:130] > peer.crt
	I0717 19:20:01.185049 1084713 command_runner.go:130] > peer.key
	I0717 19:20:01.185054 1084713 command_runner.go:130] > server.crt
	I0717 19:20:01.185059 1084713 command_runner.go:130] > server.key
	I0717 19:20:01.185181 1084713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:20:01.192634 1084713 command_runner.go:130] > Certificate will not expire
	I0717 19:20:01.192796 1084713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:20:01.199836 1084713 command_runner.go:130] > Certificate will not expire
	I0717 19:20:01.199944 1084713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:20:01.207454 1084713 command_runner.go:130] > Certificate will not expire
	I0717 19:20:01.207566 1084713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:20:01.214554 1084713 command_runner.go:130] > Certificate will not expire
	I0717 19:20:01.214664 1084713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:20:01.221703 1084713 command_runner.go:130] > Certificate will not expire
	I0717 19:20:01.221809 1084713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 19:20:01.228789 1084713 command_runner.go:130] > Certificate will not expire
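
Each `-checkend 86400` call above asserts that a control-plane certificate remains valid for at least another 24 hours before it is reused. An equivalent check can be written against crypto/x509 directly; a sketch under that assumption (path taken from the log, helper name is illustrative):

    // expiresWithin reports whether the first certificate in a PEM file
    // expires within the given window (the log uses 86400 s, i.e. 24 h).
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(pemPath string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(pemPath)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", pemPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if soon {
            fmt.Println("Certificate will expire")
        } else {
            fmt.Println("Certificate will not expire")
        }
    }
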
	I0717 19:20:01.228891 1084713 kubeadm.go:404] StartCluster: {Name:multinode-464644 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-464644 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.49 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.247 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false
istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgent
PID:0}
	I0717 19:20:01.229083 1084713 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:20:01.229157 1084713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:20:01.263892 1084713 cri.go:89] found id: ""
	I0717 19:20:01.263997 1084713 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:20:01.274622 1084713 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0717 19:20:01.274658 1084713 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0717 19:20:01.274668 1084713 command_runner.go:130] > /var/lib/minikube/etcd:
	I0717 19:20:01.274673 1084713 command_runner.go:130] > member
	I0717 19:20:01.274705 1084713 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 19:20:01.274714 1084713 kubeadm.go:636] restartCluster start
	I0717 19:20:01.274818 1084713 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:20:01.285033 1084713 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:20:01.285727 1084713 kubeconfig.go:92] found "multinode-464644" server: "https://192.168.39.174:8443"
	I0717 19:20:01.286398 1084713 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 19:20:01.286836 1084713 kapi.go:59] client config for multinode-464644: &rest.Config{Host:"https://192.168.39.174:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/client.key", CAFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 19:20:01.287790 1084713 cert_rotation.go:137] Starting client certificate rotation controller
	I0717 19:20:01.288282 1084713 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:20:01.298470 1084713 api_server.go:166] Checking apiserver status ...
	I0717 19:20:01.298556 1084713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:20:01.312419 1084713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:20:01.813268 1084713 api_server.go:166] Checking apiserver status ...
	I0717 19:20:01.813399 1084713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:20:01.828041 1084713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:20:02.313235 1084713 api_server.go:166] Checking apiserver status ...
	I0717 19:20:02.514591 1084713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:20:02.527503 1084713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:20:02.812860 1084713 api_server.go:166] Checking apiserver status ...
	I0717 19:20:02.813000 1084713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:20:02.825204 1084713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:20:03.312734 1084713 api_server.go:166] Checking apiserver status ...
	I0717 19:20:03.312870 1084713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:20:03.324972 1084713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:20:03.812570 1084713 api_server.go:166] Checking apiserver status ...
	I0717 19:20:03.812671 1084713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:20:03.825052 1084713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:20:04.312635 1084713 api_server.go:166] Checking apiserver status ...
	I0717 19:20:04.312775 1084713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:20:04.324883 1084713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:20:04.813615 1084713 api_server.go:166] Checking apiserver status ...
	I0717 19:20:04.813706 1084713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:20:04.826766 1084713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:20:05.312818 1084713 api_server.go:166] Checking apiserver status ...
	I0717 19:20:05.312923 1084713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:20:05.325574 1084713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:20:05.812922 1084713 api_server.go:166] Checking apiserver status ...
	I0717 19:20:05.813079 1084713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:20:05.824992 1084713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:20:06.312584 1084713 api_server.go:166] Checking apiserver status ...
	I0717 19:20:06.312721 1084713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:20:06.324770 1084713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:20:06.813367 1084713 api_server.go:166] Checking apiserver status ...
	I0717 19:20:06.813477 1084713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:20:06.825270 1084713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:20:07.313607 1084713 api_server.go:166] Checking apiserver status ...
	I0717 19:20:07.313697 1084713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:20:07.325545 1084713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:20:07.813202 1084713 api_server.go:166] Checking apiserver status ...
	I0717 19:20:07.813327 1084713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:20:07.826233 1084713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:20:08.312800 1084713 api_server.go:166] Checking apiserver status ...
	I0717 19:20:08.312887 1084713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:20:08.324555 1084713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:20:08.813236 1084713 api_server.go:166] Checking apiserver status ...
	I0717 19:20:08.813366 1084713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:20:08.825093 1084713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:20:09.312676 1084713 api_server.go:166] Checking apiserver status ...
	I0717 19:20:09.312772 1084713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:20:09.325313 1084713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:20:09.812795 1084713 api_server.go:166] Checking apiserver status ...
	I0717 19:20:09.812892 1084713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:20:09.825241 1084713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:20:10.313393 1084713 api_server.go:166] Checking apiserver status ...
	I0717 19:20:10.313528 1084713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:20:10.325827 1084713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:20:10.813485 1084713 api_server.go:166] Checking apiserver status ...
	I0717 19:20:10.813643 1084713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:20:10.827088 1084713 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:20:11.299084 1084713 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
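
The repeated `pgrep` probes above are a fixed-interval poll: restartCluster retries roughly every 500 ms until either a kube-apiserver process appears or the surrounding context deadline expires, which produces the "context deadline exceeded" outcome logged here. A generic loop of that shape (a sketch under those assumptions, not the minikube source):

    // pollUntil runs check about every 500 ms until it succeeds or the
    // context expires; mirrors the pgrep retry loop in the log above.
    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func pollUntil(ctx context.Context, interval time.Duration, check func() error) error {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            if err := check(); err == nil {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err() // e.g. context deadline exceeded
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        err := pollUntil(ctx, 500*time.Millisecond, func() error {
            // Equivalent of: sudo pgrep -xnf kube-apiserver.*minikube.*
            return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
        })
        fmt.Println("result:", err)
    }
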
	I0717 19:20:11.299138 1084713 kubeadm.go:1128] stopping kube-system containers ...
	I0717 19:20:11.299154 1084713 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:20:11.299265 1084713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:20:11.333164 1084713 cri.go:89] found id: ""
	I0717 19:20:11.333268 1084713 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:20:11.349071 1084713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:20:11.358601 1084713 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0717 19:20:11.358649 1084713 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0717 19:20:11.358661 1084713 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0717 19:20:11.358671 1084713 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:20:11.358714 1084713 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:20:11.358779 1084713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:20:11.369442 1084713 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 19:20:11.369481 1084713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:20:11.482857 1084713 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 19:20:11.483172 1084713 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0717 19:20:11.483838 1084713 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0717 19:20:11.484372 1084713 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 19:20:11.484996 1084713 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0717 19:20:11.485535 1084713 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0717 19:20:11.486518 1084713 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0717 19:20:11.486978 1084713 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0717 19:20:11.487528 1084713 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0717 19:20:11.487930 1084713 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 19:20:11.488483 1084713 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 19:20:11.489220 1084713 command_runner.go:130] > [certs] Using the existing "sa" key
	I0717 19:20:11.490796 1084713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:20:11.543745 1084713 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 19:20:11.715951 1084713 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 19:20:11.838372 1084713 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 19:20:12.498630 1084713 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 19:20:12.769286 1084713 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 19:20:12.772220 1084713 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.281388761s)
	I0717 19:20:12.772276 1084713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:20:12.840569 1084713 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:20:12.842739 1084713 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:20:12.845426 1084713 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0717 19:20:12.984829 1084713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:20:13.066606 1084713 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 19:20:13.066631 1084713 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 19:20:13.066637 1084713 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 19:20:13.066646 1084713 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 19:20:13.066672 1084713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:20:13.134986 1084713 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
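
Rather than a full `kubeadm init`, the restart path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml, with the addon phase run later once the API server is healthy. A sketch of that ordered replay (binary path and version taken from the log; not minikube's implementation):

    // replayKubeadmPhases runs the same ordered list of kubeadm init phases
    // the restart path above uses. Paths come from the log; sketch only.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, phase := range phases {
            args := append(phase, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command("/var/lib/minikube/binaries/v1.27.3/kubeadm", args...)
            cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.27.3:"+os.Getenv("PATH"))
            cmd.Stdout = os.Stdout
            cmd.Stderr = os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", phase, err)
                os.Exit(1)
            }
        }
    }
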
	I0717 19:20:13.143335 1084713 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:20:13.143438 1084713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:20:13.659334 1084713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:20:14.158927 1084713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:20:14.658686 1084713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:20:15.159701 1084713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:20:15.658929 1084713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:20:15.679252 1084713 command_runner.go:130] > 1071
	I0717 19:20:15.683091 1084713 api_server.go:72] duration metric: took 2.539755671s to wait for apiserver process to appear ...
	I0717 19:20:15.683138 1084713 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:20:15.683158 1084713 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8443/healthz ...
	I0717 19:20:15.683749 1084713 api_server.go:269] stopped: https://192.168.39.174:8443/healthz: Get "https://192.168.39.174:8443/healthz": dial tcp 192.168.39.174:8443: connect: connection refused
	I0717 19:20:16.184544 1084713 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8443/healthz ...
	I0717 19:20:19.976476 1084713 api_server.go:279] https://192.168.39.174:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:20:19.976515 1084713 api_server.go:103] status: https://192.168.39.174:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:20:19.976530 1084713 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8443/healthz ...
	I0717 19:20:20.010768 1084713 api_server.go:279] https://192.168.39.174:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:20:20.010843 1084713 api_server.go:103] status: https://192.168.39.174:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:20:20.184686 1084713 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8443/healthz ...
	I0717 19:20:20.190407 1084713 api_server.go:279] https://192.168.39.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:20:20.190447 1084713 api_server.go:103] status: https://192.168.39.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:20:20.684805 1084713 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8443/healthz ...
	I0717 19:20:20.693811 1084713 api_server.go:279] https://192.168.39.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:20:20.693861 1084713 api_server.go:103] status: https://192.168.39.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:20:21.184139 1084713 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8443/healthz ...
	I0717 19:20:21.193761 1084713 api_server.go:279] https://192.168.39.174:8443/healthz returned 200:
	ok
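
The health wait above treats 403 (the unauthenticated probe is rejected while RBAC roles bootstrap) and 500 (post-start hooks still failing) as "not ready yet" and keeps polling /healthz until it returns 200 with body "ok". A minimal poller of that shape (a sketch; it probes anonymously and skips TLS verification, which only makes sense against a throwaway test VM):

    // waitHealthz polls an apiserver /healthz endpoint until it answers
    // 200 "ok" or the context expires. Sketch only, not production code.
    package main

    import (
        "context"
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func waitHealthz(ctx context.Context, url string) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                // 403 (anonymous) and 500 (post-start hooks) just mean "not ready yet".
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    return nil
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(500 * time.Millisecond):
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
        defer cancel()
        fmt.Println(waitHealthz(ctx, "https://192.168.39.174:8443/healthz"))
    }
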
	I0717 19:20:21.193920 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/version
	I0717 19:20:21.193934 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:21.193946 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:21.193960 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:21.208392 1084713 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0717 19:20:21.208417 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:21.208425 1084713 round_trippers.go:580]     Audit-Id: f1b9106d-d102-418c-b7be-16bac7cd0be6
	I0717 19:20:21.208431 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:21.208436 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:21.208442 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:21.208447 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:21.208453 1084713 round_trippers.go:580]     Content-Length: 263
	I0717 19:20:21.208461 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:21 GMT
	I0717 19:20:21.208484 1084713 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.3",
	  "gitCommit": "25b4e43193bcda6c7328a6d147b1fb73a33f1598",
	  "gitTreeState": "clean",
	  "buildDate": "2023-06-14T09:47:40Z",
	  "goVersion": "go1.20.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0717 19:20:21.208584 1084713 api_server.go:141] control plane version: v1.27.3
	I0717 19:20:21.208603 1084713 api_server.go:131] duration metric: took 5.525458909s to wait for apiserver health ...
	I0717 19:20:21.208614 1084713 cni.go:84] Creating CNI manager for ""
	I0717 19:20:21.208626 1084713 cni.go:137] 3 nodes found, recommending kindnet
	I0717 19:20:21.211262 1084713 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 19:20:21.213465 1084713 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 19:20:21.230904 1084713 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0717 19:20:21.230935 1084713 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0717 19:20:21.230943 1084713 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0717 19:20:21.230954 1084713 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 19:20:21.230963 1084713 command_runner.go:130] > Access: 2023-07-17 19:19:47.710331536 +0000
	I0717 19:20:21.230970 1084713 command_runner.go:130] > Modify: 2023-07-15 02:34:28.000000000 +0000
	I0717 19:20:21.230978 1084713 command_runner.go:130] > Change: 2023-07-17 19:19:45.751331536 +0000
	I0717 19:20:21.230985 1084713 command_runner.go:130] >  Birth: -
	I0717 19:20:21.231087 1084713 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0717 19:20:21.231100 1084713 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 19:20:21.306875 1084713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 19:20:22.549075 1084713 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0717 19:20:22.563857 1084713 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0717 19:20:22.567500 1084713 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0717 19:20:22.597824 1084713 command_runner.go:130] > daemonset.apps/kindnet configured
	I0717 19:20:22.600959 1084713 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.294029799s)
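
Applying the generated kindnet manifest is a plain `kubectl apply` with the node's cached kubectl binary and kubeconfig, as the completed command above shows. An equivalent invocation from Go (paths from the log; hypothetical wrapper):

    // applyCNI shells out to the node's kubectl to apply the CNI manifest,
    // matching the command line in the log above. Sketch only.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func applyCNI(kubectl, kubeconfig, manifest string) error {
        cmd := exec.Command(kubectl, "apply", "--kubeconfig="+kubeconfig, "-f", manifest)
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        return cmd.Run()
    }

    func main() {
        err := applyCNI(
            "/var/lib/minikube/binaries/v1.27.3/kubectl",
            "/var/lib/minikube/kubeconfig",
            "/var/tmp/minikube/cni.yaml",
        )
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
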
	I0717 19:20:22.601016 1084713 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:20:22.601202 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods
	I0717 19:20:22.601217 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:22.601229 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:22.601238 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:22.609151 1084713 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 19:20:22.609190 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:22.609200 1084713 round_trippers.go:580]     Audit-Id: c7dfdc5d-bda8-4b0d-8c80-07f57d751a22
	I0717 19:20:22.609206 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:22.609211 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:22.609217 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:22.609222 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:22.609235 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:22 GMT
	I0717 19:20:22.612126 1084713 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"781"},"items":[{"metadata":{"name":"coredns-5d78c9869d-wqj4s","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991","resourceVersion":"757","creationTimestamp":"2023-07-17T19:10:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"8369cc45-03bf-4784-a3b1-d46615923fd9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8369cc45-03bf-4784-a3b1-d46615923fd9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82641 chars]
	I0717 19:20:22.616221 1084713 system_pods.go:59] 12 kube-system pods found
	I0717 19:20:22.616282 1084713 system_pods.go:61] "coredns-5d78c9869d-wqj4s" [a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 19:20:22.616296 1084713 system_pods.go:61] "etcd-multinode-464644" [b672d395-d32d-4198-b486-d9cff48d8b9a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:20:22.616307 1084713 system_pods.go:61] "kindnet-2tp5c" [4e4881b0-4a20-4588-a87b-d2ba9c9b6939] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0717 19:20:22.616316 1084713 system_pods.go:61] "kindnet-t77xh" [94cb9b0b-58b4-45cc-b6f1-1ca459aed7bc] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0717 19:20:22.616323 1084713 system_pods.go:61] "kindnet-znndf" [94e12556-bc64-4780-b11d-5f8009f953c0] Running
	I0717 19:20:22.616334 1084713 system_pods.go:61] "kube-apiserver-multinode-464644" [dd6e14e2-0b92-42b9-b6a2-1562c2c70903] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:20:22.616349 1084713 system_pods.go:61] "kube-controller-manager-multinode-464644" [6b598e8b-6c96-4014-b0a2-de37f107a0e9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:20:22.616367 1084713 system_pods.go:61] "kube-proxy-56qvt" [8207802f-ef88-4f7f-871c-bc528ef98b58] Running
	I0717 19:20:22.616374 1084713 system_pods.go:61] "kube-proxy-j6ds6" [439bb5b7-0e46-4762-a9a7-e648a212ad93] Running
	I0717 19:20:22.616381 1084713 system_pods.go:61] "kube-proxy-qwsn5" [50e3f5e0-00d9-4412-b4de-649bc29733e9] Running
	I0717 19:20:22.616393 1084713 system_pods.go:61] "kube-scheduler-multinode-464644" [04e5660d-abb0-432a-861e-c5c242edfb98] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:20:22.616401 1084713 system_pods.go:61] "storage-provisioner" [bd46cf29-49d3-4c0a-908e-a323a525d8d5] Running
	I0717 19:20:22.616410 1084713 system_pods.go:74] duration metric: took 15.385486ms to wait for pod list to return data ...
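
The "waiting for kube-system pods to appear" step above is simply a list of the kube-system namespace; with client-go the same request looks roughly like this (kubeconfig path from the log; a sketch, not the minikube source):

    // listKubeSystemPods lists pods in kube-system the way the wait step
    // above does, using client-go instead of raw HTTP. Sketch only.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/16890-1061725/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            fmt.Println(p.Name, p.Status.Phase)
        }
    }
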
	I0717 19:20:22.616423 1084713 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:20:22.616520 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes
	I0717 19:20:22.616529 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:22.616540 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:22.616550 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:22.620570 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:20:22.620609 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:22.620621 1084713 round_trippers.go:580]     Audit-Id: 5fbd720e-c0de-4d4a-b1af-729814d1af63
	I0717 19:20:22.620632 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:22.620640 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:22.620649 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:22.620662 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:22.620676 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:22 GMT
	I0717 19:20:22.621604 1084713 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"781"},"items":[{"metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"746","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 15371 chars]
	I0717 19:20:22.622519 1084713 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 19:20:22.622549 1084713 node_conditions.go:123] node cpu capacity is 2
	I0717 19:20:22.622565 1084713 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 19:20:22.622572 1084713 node_conditions.go:123] node cpu capacity is 2
	I0717 19:20:22.622577 1084713 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 19:20:22.622583 1084713 node_conditions.go:123] node cpu capacity is 2
	I0717 19:20:22.622588 1084713 node_conditions.go:105] duration metric: took 6.158236ms to run NodePressure ...
	I0717 19:20:22.622623 1084713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:20:22.814841 1084713 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0717 19:20:22.904810 1084713 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0717 19:20:22.906442 1084713 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 19:20:22.906598 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0717 19:20:22.906612 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:22.906624 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:22.906634 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:22.915231 1084713 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0717 19:20:22.915269 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:22.915279 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:22.915287 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:22.915295 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:22.915342 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:22.915353 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:22 GMT
	I0717 19:20:22.915363 1084713 round_trippers.go:580]     Audit-Id: dafe85eb-43f3-4ca6-bf3e-21392816a944
	I0717 19:20:22.916246 1084713 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"783"},"items":[{"metadata":{"name":"etcd-multinode-464644","namespace":"kube-system","uid":"b672d395-d32d-4198-b486-d9cff48d8b9a","resourceVersion":"752","creationTimestamp":"2023-07-17T19:09:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.174:2379","kubernetes.io/config.hash":"d5b599b6912e0d4b30d78bf7b7e52672","kubernetes.io/config.mirror":"d5b599b6912e0d4b30d78bf7b7e52672","kubernetes.io/config.seen":"2023-07-17T19:09:54.339578401Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:09:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 28886 chars]
	I0717 19:20:22.917317 1084713 kubeadm.go:787] kubelet initialised
	I0717 19:20:22.917341 1084713 kubeadm.go:788] duration metric: took 10.867201ms waiting for restarted kubelet to initialise ...
	I0717 19:20:22.917352 1084713 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:20:22.917434 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods
	I0717 19:20:22.917444 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:22.917456 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:22.917467 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:22.923829 1084713 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 19:20:22.923861 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:22.923872 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:22.923881 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:22 GMT
	I0717 19:20:22.923893 1084713 round_trippers.go:580]     Audit-Id: 1e624386-9989-4b28-8e17-50342fb05e91
	I0717 19:20:22.923900 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:22.923908 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:22.923917 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:22.924917 1084713 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"783"},"items":[{"metadata":{"name":"coredns-5d78c9869d-wqj4s","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991","resourceVersion":"757","creationTimestamp":"2023-07-17T19:10:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"8369cc45-03bf-4784-a3b1-d46615923fd9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8369cc45-03bf-4784-a3b1-d46615923fd9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82641 chars]
	I0717 19:20:22.927565 1084713 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-wqj4s" in "kube-system" namespace to be "Ready" ...
	I0717 19:20:22.927693 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-wqj4s
	I0717 19:20:22.927705 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:22.927713 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:22.927719 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:22.931877 1084713 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 19:20:22.931903 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:22.931913 1084713 round_trippers.go:580]     Audit-Id: 669c683e-441d-418e-9e4a-1b021d1feb56
	I0717 19:20:22.931921 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:22.931929 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:22.931935 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:22.931942 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:22.931950 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:22 GMT
	I0717 19:20:22.932733 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-wqj4s","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991","resourceVersion":"757","creationTimestamp":"2023-07-17T19:10:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"8369cc45-03bf-4784-a3b1-d46615923fd9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8369cc45-03bf-4784-a3b1-d46615923fd9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0717 19:20:22.933317 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:20:22.933334 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:22.933342 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:22.933348 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:22.936264 1084713 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:20:22.936290 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:22.936310 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:22 GMT
	I0717 19:20:22.936319 1084713 round_trippers.go:580]     Audit-Id: 65149b1e-7d46-4efb-8e9d-59333e35c076
	I0717 19:20:22.936327 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:22.936335 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:22.936343 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:22.936352 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:22.936607 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"746","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0717 19:20:22.937084 1084713 pod_ready.go:97] node "multinode-464644" hosting pod "coredns-5d78c9869d-wqj4s" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-464644" has status "Ready":"False"
	I0717 19:20:22.937112 1084713 pod_ready.go:81] duration metric: took 9.515253ms waiting for pod "coredns-5d78c9869d-wqj4s" in "kube-system" namespace to be "Ready" ...
	E0717 19:20:22.937123 1084713 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-464644" hosting pod "coredns-5d78c9869d-wqj4s" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-464644" has status "Ready":"False"
	I0717 19:20:22.937138 1084713 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:20:22.937236 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-464644
	I0717 19:20:22.937248 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:22.937259 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:22.937273 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:22.939733 1084713 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:20:22.939764 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:22.939776 1084713 round_trippers.go:580]     Audit-Id: 9fd86055-75a1-4d91-b6fb-e93e111c7204
	I0717 19:20:22.939784 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:22.939791 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:22.939800 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:22.939809 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:22.939823 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:22 GMT
	I0717 19:20:22.940044 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-464644","namespace":"kube-system","uid":"b672d395-d32d-4198-b486-d9cff48d8b9a","resourceVersion":"752","creationTimestamp":"2023-07-17T19:09:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.174:2379","kubernetes.io/config.hash":"d5b599b6912e0d4b30d78bf7b7e52672","kubernetes.io/config.mirror":"d5b599b6912e0d4b30d78bf7b7e52672","kubernetes.io/config.seen":"2023-07-17T19:09:54.339578401Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:09:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0717 19:20:22.940476 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:20:22.940491 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:22.940503 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:22.940512 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:22.943239 1084713 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:20:22.943266 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:22.943280 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:22.943287 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:22 GMT
	I0717 19:20:22.943292 1084713 round_trippers.go:580]     Audit-Id: 1034e928-0326-4fc1-b171-e583c74fbe6a
	I0717 19:20:22.943300 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:22.943308 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:22.943321 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:22.943501 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"746","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0717 19:20:22.943963 1084713 pod_ready.go:97] node "multinode-464644" hosting pod "etcd-multinode-464644" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-464644" has status "Ready":"False"
	I0717 19:20:22.943990 1084713 pod_ready.go:81] duration metric: took 6.843661ms waiting for pod "etcd-multinode-464644" in "kube-system" namespace to be "Ready" ...
	E0717 19:20:22.944000 1084713 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-464644" hosting pod "etcd-multinode-464644" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-464644" has status "Ready":"False"
	I0717 19:20:22.944013 1084713 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:20:22.944079 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-464644
	I0717 19:20:22.944086 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:22.944093 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:22.944099 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:22.947019 1084713 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:20:22.947042 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:22.947052 1084713 round_trippers.go:580]     Audit-Id: 74c62b3b-09eb-450a-a4b0-de5021e2f0a7
	I0717 19:20:22.947061 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:22.947069 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:22.947075 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:22.947083 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:22.947091 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:22 GMT
	I0717 19:20:22.947260 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-464644","namespace":"kube-system","uid":"dd6e14e2-0b92-42b9-b6a2-1562c2c70903","resourceVersion":"753","creationTimestamp":"2023-07-17T19:09:54Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.174:8443","kubernetes.io/config.hash":"b280034e13df00701aec7afc575fcc6c","kubernetes.io/config.mirror":"b280034e13df00701aec7afc575fcc6c","kubernetes.io/config.seen":"2023-07-17T19:09:54.339586957Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:09:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I0717 19:20:22.947769 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:20:22.947784 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:22.947794 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:22.947803 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:22.950318 1084713 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:20:22.950344 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:22.950355 1084713 round_trippers.go:580]     Audit-Id: 9eb8f5b3-764f-40cc-a3fe-c27b01f909ee
	I0717 19:20:22.950364 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:22.950372 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:22.950387 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:22.950396 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:22.950407 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:22 GMT
	I0717 19:20:22.950605 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"746","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0717 19:20:22.951083 1084713 pod_ready.go:97] node "multinode-464644" hosting pod "kube-apiserver-multinode-464644" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-464644" has status "Ready":"False"
	I0717 19:20:22.951107 1084713 pod_ready.go:81] duration metric: took 7.082477ms waiting for pod "kube-apiserver-multinode-464644" in "kube-system" namespace to be "Ready" ...
	E0717 19:20:22.951119 1084713 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-464644" hosting pod "kube-apiserver-multinode-464644" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-464644" has status "Ready":"False"
	I0717 19:20:22.951131 1084713 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:20:22.951219 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-464644
	I0717 19:20:22.951228 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:22.951239 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:22.951249 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:22.955274 1084713 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 19:20:22.955302 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:22.955312 1084713 round_trippers.go:580]     Audit-Id: c31068ea-641f-433b-be05-53eba991fa36
	I0717 19:20:22.955321 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:22.955330 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:22.955338 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:22.955347 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:22.955355 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:22 GMT
	I0717 19:20:22.955584 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-464644","namespace":"kube-system","uid":"6b598e8b-6c96-4014-b0a2-de37f107a0e9","resourceVersion":"750","creationTimestamp":"2023-07-17T19:09:54Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"323b8f41b30f0969feab8ff61a3ecabd","kubernetes.io/config.mirror":"323b8f41b30f0969feab8ff61a3ecabd","kubernetes.io/config.seen":"2023-07-17T19:09:54.339588566Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:09:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I0717 19:20:23.001357 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:20:23.001395 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:23.001409 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:23.001419 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:23.007841 1084713 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 19:20:23.007870 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:23.007878 1084713 round_trippers.go:580]     Audit-Id: 38c0621b-8199-4276-b5c4-cb80fa08e3d9
	I0717 19:20:23.007883 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:23.007897 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:23.007905 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:23.007913 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:23.007921 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:22 GMT
	I0717 19:20:23.008041 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"746","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0717 19:20:23.008380 1084713 pod_ready.go:97] node "multinode-464644" hosting pod "kube-controller-manager-multinode-464644" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-464644" has status "Ready":"False"
	I0717 19:20:23.008399 1084713 pod_ready.go:81] duration metric: took 57.257603ms waiting for pod "kube-controller-manager-multinode-464644" in "kube-system" namespace to be "Ready" ...
	E0717 19:20:23.008407 1084713 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-464644" hosting pod "kube-controller-manager-multinode-464644" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-464644" has status "Ready":"False"
	I0717 19:20:23.008419 1084713 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-56qvt" in "kube-system" namespace to be "Ready" ...
	I0717 19:20:23.201935 1084713 request.go:628] Waited for 193.423719ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-56qvt
	I0717 19:20:23.202004 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-56qvt
	I0717 19:20:23.202009 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:23.202018 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:23.202024 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:23.205833 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:20:23.205869 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:23.205880 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:23.205886 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:23.205891 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:23.205897 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:23.205902 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:23 GMT
	I0717 19:20:23.205908 1084713 round_trippers.go:580]     Audit-Id: 4f52e13b-f3cc-4a52-a054-e0348ca2f2de
	I0717 19:20:23.206121 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-56qvt","generateName":"kube-proxy-","namespace":"kube-system","uid":"8207802f-ef88-4f7f-871c-bc528ef98b58","resourceVersion":"721","creationTimestamp":"2023-07-17T19:11:40Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cab15633-0bd8-4e4c-a88b-c03af1462254","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:11:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cab15633-0bd8-4e4c-a88b-c03af1462254\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0717 19:20:23.402051 1084713 request.go:628] Waited for 195.419169ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes/multinode-464644-m03
	I0717 19:20:23.402132 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644-m03
	I0717 19:20:23.402140 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:23.402152 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:23.402159 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:23.405259 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:20:23.405287 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:23.405297 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:23.405305 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:23.405312 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:23.405321 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:23.405327 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:23 GMT
	I0717 19:20:23.405334 1084713 round_trippers.go:580]     Audit-Id: 3c6cb2fd-1b4f-4ee4-8e42-23bcc039e4a5
	I0717 19:20:23.405551 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644-m03","uid":"78befe00-f3c3-4f9c-86ff-aea572ef1c48","resourceVersion":"744","creationTimestamp":"2023-07-17T19:12:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:12:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3533 chars]
	I0717 19:20:23.405949 1084713 pod_ready.go:92] pod "kube-proxy-56qvt" in "kube-system" namespace has status "Ready":"True"
	I0717 19:20:23.405969 1084713 pod_ready.go:81] duration metric: took 397.542696ms waiting for pod "kube-proxy-56qvt" in "kube-system" namespace to be "Ready" ...
	I0717 19:20:23.405982 1084713 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-j6ds6" in "kube-system" namespace to be "Ready" ...
	I0717 19:20:23.601420 1084713 request.go:628] Waited for 195.326168ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j6ds6
	I0717 19:20:23.601503 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j6ds6
	I0717 19:20:23.601511 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:23.601521 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:23.601545 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:23.604529 1084713 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:20:23.604556 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:23.604564 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:23.604570 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:23.604575 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:23.604580 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:23 GMT
	I0717 19:20:23.604585 1084713 round_trippers.go:580]     Audit-Id: 9a59d17e-ca35-4a68-b3b8-99367f85bc18
	I0717 19:20:23.604590 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:23.604747 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-j6ds6","generateName":"kube-proxy-","namespace":"kube-system","uid":"439bb5b7-0e46-4762-a9a7-e648a212ad93","resourceVersion":"518","creationTimestamp":"2023-07-17T19:10:52Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cab15633-0bd8-4e4c-a88b-c03af1462254","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cab15633-0bd8-4e4c-a88b-c03af1462254\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0717 19:20:23.801682 1084713 request.go:628] Waited for 196.470513ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes/multinode-464644-m02
	I0717 19:20:23.801759 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644-m02
	I0717 19:20:23.801785 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:23.801802 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:23.801816 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:23.805454 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:20:23.805487 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:23.805499 1084713 round_trippers.go:580]     Audit-Id: 336e57e4-4d3b-4009-99bf-abc8ecd46fa5
	I0717 19:20:23.805507 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:23.805521 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:23.805532 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:23.805540 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:23.805549 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:23 GMT
	I0717 19:20:23.805692 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644-m02","uid":"7808c8a5-eed0-4632-bd7c-dc2a2a06fa78","resourceVersion":"743","creationTimestamp":"2023-07-17T19:10:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3683 chars]
	I0717 19:20:23.805990 1084713 pod_ready.go:92] pod "kube-proxy-j6ds6" in "kube-system" namespace has status "Ready":"True"
	I0717 19:20:23.806020 1084713 pod_ready.go:81] duration metric: took 400.013423ms waiting for pod "kube-proxy-j6ds6" in "kube-system" namespace to be "Ready" ...
	I0717 19:20:23.806037 1084713 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qwsn5" in "kube-system" namespace to be "Ready" ...
	I0717 19:20:24.001513 1084713 request.go:628] Waited for 195.354167ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qwsn5
	I0717 19:20:24.001606 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qwsn5
	I0717 19:20:24.001614 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:24.001626 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:24.001635 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:24.005521 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:20:24.005551 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:24.005586 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:24.005595 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:24.005602 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:24.005610 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:24.005618 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:23 GMT
	I0717 19:20:24.005625 1084713 round_trippers.go:580]     Audit-Id: 8987b6d7-a1c6-41ed-ad38-ce3b5d29b835
	I0717 19:20:24.005747 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qwsn5","generateName":"kube-proxy-","namespace":"kube-system","uid":"50e3f5e0-00d9-4412-b4de-649bc29733e9","resourceVersion":"776","creationTimestamp":"2023-07-17T19:10:08Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cab15633-0bd8-4e4c-a88b-c03af1462254","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cab15633-0bd8-4e4c-a88b-c03af1462254\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0717 19:20:24.201647 1084713 request.go:628] Waited for 195.407147ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:20:24.201709 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:20:24.201714 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:24.201723 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:24.201729 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:24.205736 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:20:24.205766 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:24.205775 1084713 round_trippers.go:580]     Audit-Id: 332f5a14-5b06-43b7-bf2b-62414b6dc814
	I0717 19:20:24.205784 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:24.205791 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:24.205800 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:24.205807 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:24.205814 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:24 GMT
	I0717 19:20:24.206008 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"746","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0717 19:20:24.206395 1084713 pod_ready.go:97] node "multinode-464644" hosting pod "kube-proxy-qwsn5" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-464644" has status "Ready":"False"
	I0717 19:20:24.206419 1084713 pod_ready.go:81] duration metric: took 400.367894ms waiting for pod "kube-proxy-qwsn5" in "kube-system" namespace to be "Ready" ...
	E0717 19:20:24.206431 1084713 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-464644" hosting pod "kube-proxy-qwsn5" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-464644" has status "Ready":"False"
	I0717 19:20:24.206445 1084713 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:20:24.401944 1084713 request.go:628] Waited for 195.415092ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-464644
	I0717 19:20:24.402021 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-464644
	I0717 19:20:24.402027 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:24.402035 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:24.402041 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:24.405353 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:20:24.405386 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:24.405398 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:24.405409 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:24 GMT
	I0717 19:20:24.405419 1084713 round_trippers.go:580]     Audit-Id: 9389e41b-d1a5-4aa7-999c-07e3fd880b93
	I0717 19:20:24.405430 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:24.405437 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:24.405442 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:24.405615 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-464644","namespace":"kube-system","uid":"04e5660d-abb0-432a-861e-c5c242edfb98","resourceVersion":"751","creationTimestamp":"2023-07-17T19:09:54Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6435a6b37c43f83175753c4199c85407","kubernetes.io/config.mirror":"6435a6b37c43f83175753c4199c85407","kubernetes.io/config.seen":"2023-07-17T19:09:54.339590320Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:09:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4928 chars]
	I0717 19:20:24.601477 1084713 request.go:628] Waited for 195.341065ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:20:24.601574 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:20:24.601585 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:24.601612 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:24.601619 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:24.604321 1084713 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:20:24.604354 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:24.604366 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:24.604375 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:24.604387 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:24 GMT
	I0717 19:20:24.604396 1084713 round_trippers.go:580]     Audit-Id: 07e3aec1-ee73-45e8-8369-48790d2b0c63
	I0717 19:20:24.604408 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:24.604415 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:24.604634 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"746","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0717 19:20:24.604988 1084713 pod_ready.go:97] node "multinode-464644" hosting pod "kube-scheduler-multinode-464644" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-464644" has status "Ready":"False"
	I0717 19:20:24.605010 1084713 pod_ready.go:81] duration metric: took 398.558159ms waiting for pod "kube-scheduler-multinode-464644" in "kube-system" namespace to be "Ready" ...
	E0717 19:20:24.605022 1084713 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-464644" hosting pod "kube-scheduler-multinode-464644" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-464644" has status "Ready":"False"
	I0717 19:20:24.605035 1084713 pod_ready.go:38] duration metric: took 1.687670913s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
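Note: the block above is minikube's extra readiness wait — for each system-critical pod it GETs the pod, then GETs the node named in pod.Spec.NodeName, and skips the pod while that node still reports Ready=False. Below is a minimal client-go sketch of that pattern (a hypothetical podReadyOnReadyNode helper, not minikube's pod_ready.go; the clientset construction is assumed):

package readiness

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podReadyOnReadyNode fetches the pod, then the node it is scheduled on,
// and reports the pod as not Ready while that node has Ready=False —
// the same reason the control-plane pods above are skipped.
func podReadyOnReadyNode(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
			return false, nil // hosting node not Ready: skip the pod for now
		}
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}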
	I0717 19:20:24.605066 1084713 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 19:20:24.618932 1084713 command_runner.go:130] > -16
	I0717 19:20:24.618979 1084713 ops.go:34] apiserver oom_adj: -16
	I0717 19:20:24.618989 1084713 kubeadm.go:640] restartCluster took 23.344267936s
	I0717 19:20:24.619002 1084713 kubeadm.go:406] StartCluster complete in 23.390132845s
	I0717 19:20:24.619026 1084713 settings.go:142] acquiring lock: {Name:mk3323296c186763db6ebb5a1c5ae94a6b1a6242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:20:24.619148 1084713 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 19:20:24.619760 1084713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/kubeconfig: {Name:mk7f4fe48169c87be980c7edd9dbe55d4ea8b9ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:20:24.620033 1084713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 19:20:24.620250 1084713 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0717 19:20:24.624247 1084713 out.go:177] * Enabled addons: 
	I0717 19:20:24.620403 1084713 config.go:182] Loaded profile config "multinode-464644": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:20:24.620413 1084713 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 19:20:24.626355 1084713 addons.go:502] enable addons completed in 6.079951ms: enabled=[]
	I0717 19:20:24.626617 1084713 kapi.go:59] client config for multinode-464644: &rest.Config{Host:"https://192.168.39.174:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/client.key", CAFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
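Note: the client config above leaves QPS and Burst at zero, so client-go falls back to its defaults (5 QPS, burst 10); the ~195 ms "Waited for ... due to client-side throttling" messages earlier in this log are consistent with that 5 QPS token bucket. A small self-contained sketch of the same limiter (assuming k8s.io/client-go/util/flowcontrol):

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// Defaults applied when rest.Config leaves QPS/Burst at zero: 5 QPS, burst 10.
	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)
	for i := 0; i < 15; i++ {
		start := time.Now()
		limiter.Accept() // blocks once the initial burst of 10 is spent
		fmt.Printf("request %2d waited %v\n", i, time.Since(start).Round(time.Millisecond))
	}
}

After the first ten calls, each further Accept waits roughly 200 ms, matching the spacing seen in the throttled requests above.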
	I0717 19:20:24.627052 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0717 19:20:24.627066 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:24.627074 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:24.627080 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:24.630343 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:20:24.630367 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:24.630374 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:24.630379 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:24.630385 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:24.630391 1084713 round_trippers.go:580]     Content-Length: 291
	I0717 19:20:24.630396 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:24 GMT
	I0717 19:20:24.630402 1084713 round_trippers.go:580]     Audit-Id: be00e171-0202-44bd-9f9b-2596ee67c4aa
	I0717 19:20:24.630407 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:24.630435 1084713 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"06c3326f-def8-45bf-a91d-f07feefe253d","resourceVersion":"782","creationTimestamp":"2023-07-17T19:09:54Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0717 19:20:24.630606 1084713 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-464644" context rescaled to 1 replicas
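Note: the "rescaled to 1 replicas" line above goes through the Deployment scale subresource — read the current Scale, and only write it back if the replica count differs. A hedged client-go sketch of that call sequence (hypothetical ensureCoreDNSReplicas helper, not minikube's kapi.go):

package rescale

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureCoreDNSReplicas reads the coredns Deployment's scale subresource
// and updates it only when the replica count differs from the target.
func ensureCoreDNSReplicas(ctx context.Context, cs kubernetes.Interface, replicas int32) error {
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	if scale.Spec.Replicas == replicas {
		return nil // already at the desired count, nothing to write
	}
	scale.Spec.Replicas = replicas
	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}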
	I0717 19:20:24.630636 1084713 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:20:24.633179 1084713 out.go:177] * Verifying Kubernetes components...
	I0717 19:20:24.635301 1084713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:20:24.740593 1084713 command_runner.go:130] > apiVersion: v1
	I0717 19:20:24.740629 1084713 command_runner.go:130] > data:
	I0717 19:20:24.740634 1084713 command_runner.go:130] >   Corefile: |
	I0717 19:20:24.740638 1084713 command_runner.go:130] >     .:53 {
	I0717 19:20:24.740642 1084713 command_runner.go:130] >         log
	I0717 19:20:24.740647 1084713 command_runner.go:130] >         errors
	I0717 19:20:24.740651 1084713 command_runner.go:130] >         health {
	I0717 19:20:24.740656 1084713 command_runner.go:130] >            lameduck 5s
	I0717 19:20:24.740659 1084713 command_runner.go:130] >         }
	I0717 19:20:24.740663 1084713 command_runner.go:130] >         ready
	I0717 19:20:24.740669 1084713 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0717 19:20:24.740673 1084713 command_runner.go:130] >            pods insecure
	I0717 19:20:24.740679 1084713 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0717 19:20:24.740683 1084713 command_runner.go:130] >            ttl 30
	I0717 19:20:24.740687 1084713 command_runner.go:130] >         }
	I0717 19:20:24.740691 1084713 command_runner.go:130] >         prometheus :9153
	I0717 19:20:24.740694 1084713 command_runner.go:130] >         hosts {
	I0717 19:20:24.740699 1084713 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I0717 19:20:24.740702 1084713 command_runner.go:130] >            fallthrough
	I0717 19:20:24.740706 1084713 command_runner.go:130] >         }
	I0717 19:20:24.740710 1084713 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0717 19:20:24.740715 1084713 command_runner.go:130] >            max_concurrent 1000
	I0717 19:20:24.740719 1084713 command_runner.go:130] >         }
	I0717 19:20:24.740723 1084713 command_runner.go:130] >         cache 30
	I0717 19:20:24.740731 1084713 command_runner.go:130] >         loop
	I0717 19:20:24.740735 1084713 command_runner.go:130] >         reload
	I0717 19:20:24.740740 1084713 command_runner.go:130] >         loadbalance
	I0717 19:20:24.740743 1084713 command_runner.go:130] >     }
	I0717 19:20:24.740750 1084713 command_runner.go:130] > kind: ConfigMap
	I0717 19:20:24.740763 1084713 command_runner.go:130] > metadata:
	I0717 19:20:24.740768 1084713 command_runner.go:130] >   creationTimestamp: "2023-07-17T19:09:54Z"
	I0717 19:20:24.740772 1084713 command_runner.go:130] >   name: coredns
	I0717 19:20:24.740776 1084713 command_runner.go:130] >   namespace: kube-system
	I0717 19:20:24.740780 1084713 command_runner.go:130] >   resourceVersion: "398"
	I0717 19:20:24.740788 1084713 command_runner.go:130] >   uid: 13425687-4297-46fd-ae23-038f5de0a562
	I0717 19:20:24.740854 1084713 node_ready.go:35] waiting up to 6m0s for node "multinode-464644" to be "Ready" ...
	I0717 19:20:24.740899 1084713 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
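Note: the "already contains host.minikube.internal host record, skipping" line above amounts to a containment check against the Corefile dumped a few lines earlier, whose hosts block already maps 192.168.39.1 to host.minikube.internal. A minimal sketch of that check (hypothetical hasHostRecord helper; cm is assumed to be the kube-system/coredns ConfigMap fetched as above):

package dns

import (
	"strings"

	corev1 "k8s.io/api/core/v1"
)

// hasHostRecord reports whether the Corefile already carries the
// host.minikube.internal entry for the given host IP, in which case
// the ConfigMap is left untouched.
func hasHostRecord(cm *corev1.ConfigMap, hostIP string) bool {
	return strings.Contains(cm.Data["Corefile"], hostIP+" host.minikube.internal")
}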
	I0717 19:20:24.802245 1084713 request.go:628] Waited for 61.275208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:20:24.802319 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:20:24.802326 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:24.802335 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:24.802345 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:24.806257 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:20:24.806286 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:24.806295 1084713 round_trippers.go:580]     Audit-Id: 3c24151a-e8ba-4a35-bb82-f29a4126c70f
	I0717 19:20:24.806301 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:24.806310 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:24.806320 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:24.806335 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:24.806345 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:24 GMT
	I0717 19:20:24.806508 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"746","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0717 19:20:25.307541 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:20:25.307572 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:25.307580 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:25.307586 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:25.310614 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:20:25.310640 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:25.310648 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:25.310653 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:25.310659 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:25 GMT
	I0717 19:20:25.310664 1084713 round_trippers.go:580]     Audit-Id: b6d50276-74c8-4355-a969-d5d6d7dabf91
	I0717 19:20:25.310669 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:25.310675 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:25.310850 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"746","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0717 19:20:25.807452 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:20:25.807481 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:25.807490 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:25.807496 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:25.810582 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:20:25.810617 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:25.810628 1084713 round_trippers.go:580]     Audit-Id: 94fadc69-bc2a-42ff-aafb-e5cfee2826e4
	I0717 19:20:25.810636 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:25.810645 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:25.810653 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:25.810660 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:25.810667 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:25 GMT
	I0717 19:20:25.810945 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"865","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0717 19:20:25.811383 1084713 node_ready.go:49] node "multinode-464644" has status "Ready":"True"
	I0717 19:20:25.811411 1084713 node_ready.go:38] duration metric: took 1.070529511s waiting for node "multinode-464644" to be "Ready" ...
	I0717 19:20:25.811423 1084713 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:20:25.811527 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods
	I0717 19:20:25.811540 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:25.811551 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:25.811560 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:25.816303 1084713 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 19:20:25.816338 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:25.816350 1084713 round_trippers.go:580]     Audit-Id: 0dbba55c-ba34-4b8a-9e4d-4de384c34853
	I0717 19:20:25.816361 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:25.816369 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:25.816377 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:25.816385 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:25.816394 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:25 GMT
	I0717 19:20:25.817648 1084713 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"865"},"items":[{"metadata":{"name":"coredns-5d78c9869d-wqj4s","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991","resourceVersion":"757","creationTimestamp":"2023-07-17T19:10:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"8369cc45-03bf-4784-a3b1-d46615923fd9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8369cc45-03bf-4784-a3b1-d46615923fd9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82960 chars]
	I0717 19:20:25.820968 1084713 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-wqj4s" in "kube-system" namespace to be "Ready" ...
	I0717 19:20:25.821085 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-wqj4s
	I0717 19:20:25.821099 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:25.821111 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:25.821124 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:25.824077 1084713 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:20:25.824116 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:25.824126 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:25.824135 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:25.824143 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:25 GMT
	I0717 19:20:25.824152 1084713 round_trippers.go:580]     Audit-Id: 3e9920fd-a578-4d76-a9ff-aa37b05bf0fa
	I0717 19:20:25.824163 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:25.824172 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:25.824293 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-wqj4s","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991","resourceVersion":"757","creationTimestamp":"2023-07-17T19:10:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"8369cc45-03bf-4784-a3b1-d46615923fd9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8369cc45-03bf-4784-a3b1-d46615923fd9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0717 19:20:25.824795 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:20:25.824810 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:25.824818 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:25.824824 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:25.827092 1084713 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:20:25.827116 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:25.827126 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:25.827136 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:25 GMT
	I0717 19:20:25.827144 1084713 round_trippers.go:580]     Audit-Id: 412cd2eb-2bf0-4307-8598-ec1a904bc588
	I0717 19:20:25.827153 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:25.827164 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:25.827183 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:25.827307 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"865","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0717 19:20:26.328602 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-wqj4s
	I0717 19:20:26.328634 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:26.328644 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:26.328650 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:26.331482 1084713 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:20:26.331519 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:26.331531 1084713 round_trippers.go:580]     Audit-Id: 906c6268-137b-4115-bc03-6f8a3b336821
	I0717 19:20:26.331540 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:26.331548 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:26.331556 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:26.331563 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:26.331575 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:26 GMT
	I0717 19:20:26.331780 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-wqj4s","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991","resourceVersion":"757","creationTimestamp":"2023-07-17T19:10:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"8369cc45-03bf-4784-a3b1-d46615923fd9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8369cc45-03bf-4784-a3b1-d46615923fd9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0717 19:20:26.332419 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:20:26.332437 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:26.332445 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:26.332451 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:26.334861 1084713 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:20:26.334883 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:26.334894 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:26 GMT
	I0717 19:20:26.334910 1084713 round_trippers.go:580]     Audit-Id: 7cdbadc8-a895-4fdf-b303-6c8f9b15b6c8
	I0717 19:20:26.334919 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:26.334932 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:26.334942 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:26.334955 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:26.335087 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"865","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0717 19:20:26.828706 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-wqj4s
	I0717 19:20:26.828732 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:26.828741 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:26.828747 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:26.834768 1084713 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 19:20:26.834803 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:26.834814 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:26.834823 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:26.834845 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:26.834854 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:26 GMT
	I0717 19:20:26.834868 1084713 round_trippers.go:580]     Audit-Id: e18c6ca3-a0db-44b3-bdc8-047a6006219e
	I0717 19:20:26.834882 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:26.835872 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-wqj4s","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991","resourceVersion":"757","creationTimestamp":"2023-07-17T19:10:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"8369cc45-03bf-4784-a3b1-d46615923fd9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8369cc45-03bf-4784-a3b1-d46615923fd9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0717 19:20:26.836581 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:20:26.836601 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:26.836609 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:26.836617 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:26.839742 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:20:26.839767 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:26.839777 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:26.839786 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:26.839794 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:26 GMT
	I0717 19:20:26.839803 1084713 round_trippers.go:580]     Audit-Id: eab51220-364a-4f3a-b8dd-bc1fe9a7f6b4
	I0717 19:20:26.839816 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:26.839825 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:26.840466 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"865","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0717 19:20:27.328745 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-wqj4s
	I0717 19:20:27.328774 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:27.328783 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:27.328789 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:27.332300 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:20:27.332337 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:27.332349 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:27.332357 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:27.332366 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:27 GMT
	I0717 19:20:27.332375 1084713 round_trippers.go:580]     Audit-Id: 64ceea3c-e089-46a4-b20e-be02225aba3c
	I0717 19:20:27.332383 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:27.332391 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:27.332760 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-wqj4s","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991","resourceVersion":"757","creationTimestamp":"2023-07-17T19:10:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"8369cc45-03bf-4784-a3b1-d46615923fd9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8369cc45-03bf-4784-a3b1-d46615923fd9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0717 19:20:27.333447 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:20:27.333469 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:27.333481 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:27.333490 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:27.336446 1084713 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:20:27.336470 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:27.336477 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:27 GMT
	I0717 19:20:27.336483 1084713 round_trippers.go:580]     Audit-Id: 2aaeaef6-2fce-492b-bb90-81902689a22e
	I0717 19:20:27.336488 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:27.336496 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:27.336502 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:27.336507 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:27.336654 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"865","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0717 19:20:27.828181 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-wqj4s
	I0717 19:20:27.828213 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:27.828221 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:27.828242 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:27.832862 1084713 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 19:20:27.832890 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:27.832898 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:27.832904 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:27.832909 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:27.832914 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:27.832920 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:27 GMT
	I0717 19:20:27.832927 1084713 round_trippers.go:580]     Audit-Id: 0b4a8b0f-f602-4680-b3df-ce212d2f4d99
	I0717 19:20:27.833067 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-wqj4s","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991","resourceVersion":"757","creationTimestamp":"2023-07-17T19:10:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"8369cc45-03bf-4784-a3b1-d46615923fd9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8369cc45-03bf-4784-a3b1-d46615923fd9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0717 19:20:27.833620 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:20:27.833633 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:27.833642 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:27.833649 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:27.836291 1084713 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:20:27.836315 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:27.836325 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:27.836333 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:27.836341 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:27 GMT
	I0717 19:20:27.836349 1084713 round_trippers.go:580]     Audit-Id: 290acf81-947b-4816-951a-2ccf41b14bc2
	I0717 19:20:27.836358 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:27.836368 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:27.836451 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"865","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0717 19:20:27.836779 1084713 pod_ready.go:102] pod "coredns-5d78c9869d-wqj4s" in "kube-system" namespace has status "Ready":"False"
	I0717 19:20:28.328127 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-wqj4s
	I0717 19:20:28.328168 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:28.328182 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:28.328193 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:28.334315 1084713 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 19:20:28.334352 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:28.334365 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:28.334374 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:28.334382 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:28.334391 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:28 GMT
	I0717 19:20:28.334399 1084713 round_trippers.go:580]     Audit-Id: b785a817-a795-470f-88e7-92bdc763c732
	I0717 19:20:28.334407 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:28.334621 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-wqj4s","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991","resourceVersion":"757","creationTimestamp":"2023-07-17T19:10:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"8369cc45-03bf-4784-a3b1-d46615923fd9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8369cc45-03bf-4784-a3b1-d46615923fd9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0717 19:20:28.335287 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:20:28.335305 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:28.335317 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:28.335327 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:28.339589 1084713 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 19:20:28.339624 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:28.339636 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:28.339645 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:28.339655 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:28.339666 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:28 GMT
	I0717 19:20:28.339674 1084713 round_trippers.go:580]     Audit-Id: 370afb28-4cdc-4e41-b514-76f71c80db5f
	I0717 19:20:28.339687 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:28.339834 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"865","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0717 19:20:28.828344 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-wqj4s
	I0717 19:20:28.828379 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:28.828393 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:28.828403 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:28.832392 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:20:28.832430 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:28.832441 1084713 round_trippers.go:580]     Audit-Id: 252654b0-6de8-4c8a-af6d-1349839da0ab
	I0717 19:20:28.832450 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:28.832458 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:28.832466 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:28.832475 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:28.832483 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:28 GMT
	I0717 19:20:28.832712 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-wqj4s","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991","resourceVersion":"757","creationTimestamp":"2023-07-17T19:10:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"8369cc45-03bf-4784-a3b1-d46615923fd9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8369cc45-03bf-4784-a3b1-d46615923fd9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0717 19:20:28.833355 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:20:28.833377 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:28.833389 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:28.833398 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:28.836456 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:20:28.836481 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:28.836492 1084713 round_trippers.go:580]     Audit-Id: 1dece8cf-5e68-440f-87cf-359a4f2e3a4f
	I0717 19:20:28.836501 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:28.836510 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:28.836519 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:28.836528 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:28.836541 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:28 GMT
	I0717 19:20:28.836716 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"865","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0717 19:20:29.328410 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-wqj4s
	I0717 19:20:29.328446 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:29.328458 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:29.328464 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:29.331845 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:20:29.331885 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:29.331898 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:29.331908 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:29 GMT
	I0717 19:20:29.331917 1084713 round_trippers.go:580]     Audit-Id: c0e1abe4-62e9-4060-af42-bcf4d9dcc343
	I0717 19:20:29.331926 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:29.331935 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:29.331944 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:29.332141 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-wqj4s","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991","resourceVersion":"757","creationTimestamp":"2023-07-17T19:10:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"8369cc45-03bf-4784-a3b1-d46615923fd9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8369cc45-03bf-4784-a3b1-d46615923fd9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0717 19:20:29.332718 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:20:29.332738 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:29.332746 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:29.332752 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:29.335426 1084713 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:20:29.335449 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:29.335460 1084713 round_trippers.go:580]     Audit-Id: 26566a34-7645-4dca-b42e-354e1c507465
	I0717 19:20:29.335470 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:29.335479 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:29.335488 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:29.335498 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:29.335505 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:29 GMT
	I0717 19:20:29.335625 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"865","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0717 19:20:29.828172 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-wqj4s
	I0717 19:20:29.828212 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:29.828237 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:29.828247 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:29.831603 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:20:29.831636 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:29.831645 1084713 round_trippers.go:580]     Audit-Id: 0230149f-e8b2-4c6d-8225-2596a43f2ddb
	I0717 19:20:29.831651 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:29.831657 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:29.831662 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:29.831668 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:29.831677 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:29 GMT
	I0717 19:20:29.831823 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-wqj4s","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991","resourceVersion":"873","creationTimestamp":"2023-07-17T19:10:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"8369cc45-03bf-4784-a3b1-d46615923fd9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8369cc45-03bf-4784-a3b1-d46615923fd9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0717 19:20:29.832621 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:20:29.832649 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:29.832659 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:29.832669 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:29.835812 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:20:29.835845 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:29.835857 1084713 round_trippers.go:580]     Audit-Id: 8f9b2d39-7b59-4097-8dd4-f6beca23d17c
	I0717 19:20:29.835867 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:29.835877 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:29.835887 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:29.835896 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:29.835904 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:29 GMT
	I0717 19:20:29.836047 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"865","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0717 19:20:29.836585 1084713 pod_ready.go:92] pod "coredns-5d78c9869d-wqj4s" in "kube-system" namespace has status "Ready":"True"
	I0717 19:20:29.836617 1084713 pod_ready.go:81] duration metric: took 4.015612765s waiting for pod "coredns-5d78c9869d-wqj4s" in "kube-system" namespace to be "Ready" ...
	I0717 19:20:29.836643 1084713 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:20:29.836739 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-464644
	I0717 19:20:29.836750 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:29.836760 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:29.836771 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:29.839681 1084713 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:20:29.839702 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:29.839711 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:29.839717 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:29.839723 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:29 GMT
	I0717 19:20:29.839732 1084713 round_trippers.go:580]     Audit-Id: 4f3b5396-3892-47c4-bdb0-012c5f86e130
	I0717 19:20:29.839740 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:29.839748 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:29.840031 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-464644","namespace":"kube-system","uid":"b672d395-d32d-4198-b486-d9cff48d8b9a","resourceVersion":"752","creationTimestamp":"2023-07-17T19:09:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.174:2379","kubernetes.io/config.hash":"d5b599b6912e0d4b30d78bf7b7e52672","kubernetes.io/config.mirror":"d5b599b6912e0d4b30d78bf7b7e52672","kubernetes.io/config.seen":"2023-07-17T19:09:54.339578401Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:09:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0717 19:20:29.840498 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:20:29.840513 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:29.840521 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:29.840527 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:29.844352 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:20:29.844383 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:29.844395 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:29 GMT
	I0717 19:20:29.844407 1084713 round_trippers.go:580]     Audit-Id: 1369b524-a1cb-42de-8852-e16dd5df9553
	I0717 19:20:29.844416 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:29.844424 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:29.844432 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:29.844441 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:29.845366 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"865","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0717 19:20:30.346776 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-464644
	I0717 19:20:30.346809 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:30.346822 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:30.346832 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:30.350094 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:20:30.350126 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:30.350136 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:30.350146 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:30 GMT
	I0717 19:20:30.350154 1084713 round_trippers.go:580]     Audit-Id: ceeee3b8-436e-48b2-83b2-246d5e9e16c0
	I0717 19:20:30.350161 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:30.350168 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:30.350176 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:30.350639 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-464644","namespace":"kube-system","uid":"b672d395-d32d-4198-b486-d9cff48d8b9a","resourceVersion":"752","creationTimestamp":"2023-07-17T19:09:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.174:2379","kubernetes.io/config.hash":"d5b599b6912e0d4b30d78bf7b7e52672","kubernetes.io/config.mirror":"d5b599b6912e0d4b30d78bf7b7e52672","kubernetes.io/config.seen":"2023-07-17T19:09:54.339578401Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:09:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0717 19:20:30.351128 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:20:30.351140 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:30.351148 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:30.351155 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:30.353704 1084713 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:20:30.353727 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:30.353736 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:30 GMT
	I0717 19:20:30.353745 1084713 round_trippers.go:580]     Audit-Id: 0eaf7d96-b727-4d3f-aa16-8d6bff83a443
	I0717 19:20:30.353753 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:30.353764 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:30.353773 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:30.353787 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:30.353966 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"865","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0717 19:20:30.846699 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-464644
	I0717 19:20:30.846732 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:30.846746 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:30.846755 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:30.850148 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:20:30.850175 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:30.850182 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:30.850188 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:30.850194 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:30 GMT
	I0717 19:20:30.850199 1084713 round_trippers.go:580]     Audit-Id: 37730f79-ce35-4110-80bf-a3ad47d332fd
	I0717 19:20:30.850204 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:30.850216 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:30.850363 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-464644","namespace":"kube-system","uid":"b672d395-d32d-4198-b486-d9cff48d8b9a","resourceVersion":"752","creationTimestamp":"2023-07-17T19:09:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.174:2379","kubernetes.io/config.hash":"d5b599b6912e0d4b30d78bf7b7e52672","kubernetes.io/config.mirror":"d5b599b6912e0d4b30d78bf7b7e52672","kubernetes.io/config.seen":"2023-07-17T19:09:54.339578401Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:09:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0717 19:20:30.850843 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:20:30.850861 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:30.850868 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:30.850874 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:30.853497 1084713 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:20:30.853520 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:30.853529 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:30.853537 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:30.853545 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:30 GMT
	I0717 19:20:30.853552 1084713 round_trippers.go:580]     Audit-Id: 7daf3f49-0d61-4601-ae89-a86633d415e1
	I0717 19:20:30.853578 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:30.853587 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:30.853757 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"865","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0717 19:20:31.346558 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-464644
	I0717 19:20:31.346586 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:31.346595 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:31.346601 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:31.350026 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:20:31.350055 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:31.350063 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:31.350068 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:31.350074 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:31 GMT
	I0717 19:20:31.350079 1084713 round_trippers.go:580]     Audit-Id: e1c5d1e0-f8f8-4507-be68-f671170d6ad2
	I0717 19:20:31.350084 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:31.350089 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:31.350204 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-464644","namespace":"kube-system","uid":"b672d395-d32d-4198-b486-d9cff48d8b9a","resourceVersion":"752","creationTimestamp":"2023-07-17T19:09:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.174:2379","kubernetes.io/config.hash":"d5b599b6912e0d4b30d78bf7b7e52672","kubernetes.io/config.mirror":"d5b599b6912e0d4b30d78bf7b7e52672","kubernetes.io/config.seen":"2023-07-17T19:09:54.339578401Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:09:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0717 19:20:31.350635 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:20:31.350649 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:31.350658 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:31.350664 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:31.353334 1084713 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:20:31.353365 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:31.353376 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:31.353385 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:31 GMT
	I0717 19:20:31.353392 1084713 round_trippers.go:580]     Audit-Id: c8b9b43f-f598-495c-8482-97545c38572e
	I0717 19:20:31.353400 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:31.353407 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:31.353415 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:31.353553 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"865","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0717 19:20:31.846174 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-464644
	I0717 19:20:31.846203 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:31.846212 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:31.846219 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:31.849766 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:20:31.849797 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:31.849806 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:31.849812 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:31.849817 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:31.849823 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:31 GMT
	I0717 19:20:31.849828 1084713 round_trippers.go:580]     Audit-Id: 6b312e52-18b4-4cb2-b8b0-4e292090dab7
	I0717 19:20:31.849833 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:31.849943 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-464644","namespace":"kube-system","uid":"b672d395-d32d-4198-b486-d9cff48d8b9a","resourceVersion":"752","creationTimestamp":"2023-07-17T19:09:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.174:2379","kubernetes.io/config.hash":"d5b599b6912e0d4b30d78bf7b7e52672","kubernetes.io/config.mirror":"d5b599b6912e0d4b30d78bf7b7e52672","kubernetes.io/config.seen":"2023-07-17T19:09:54.339578401Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:09:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0717 19:20:31.850461 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:20:31.850478 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:31.850486 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:31.850492 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:31.853087 1084713 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:20:31.853108 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:31.853115 1084713 round_trippers.go:580]     Audit-Id: a3442024-bf3f-4bb0-b26f-efbab6c0b687
	I0717 19:20:31.853121 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:31.853126 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:31.853131 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:31.853137 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:31.853144 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:31 GMT
	I0717 19:20:31.853305 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"865","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0717 19:20:31.853787 1084713 pod_ready.go:102] pod "etcd-multinode-464644" in "kube-system" namespace has status "Ready":"False"
	I0717 19:20:32.346115 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-464644
	I0717 19:20:32.346142 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:32.346152 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:32.346157 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:32.350686 1084713 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 19:20:32.350723 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:32.350734 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:32 GMT
	I0717 19:20:32.350743 1084713 round_trippers.go:580]     Audit-Id: 213ae4f3-2002-4046-ac53-0b9237f698c3
	I0717 19:20:32.350752 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:32.350761 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:32.350770 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:32.350779 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:32.350934 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-464644","namespace":"kube-system","uid":"b672d395-d32d-4198-b486-d9cff48d8b9a","resourceVersion":"884","creationTimestamp":"2023-07-17T19:09:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.174:2379","kubernetes.io/config.hash":"d5b599b6912e0d4b30d78bf7b7e52672","kubernetes.io/config.mirror":"d5b599b6912e0d4b30d78bf7b7e52672","kubernetes.io/config.seen":"2023-07-17T19:09:54.339578401Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:09:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0717 19:20:32.351513 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:20:32.351535 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:32.351546 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:32.351555 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:32.355142 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:20:32.355170 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:32.355178 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:32.355183 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:32.355189 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:32.355194 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:32 GMT
	I0717 19:20:32.355199 1084713 round_trippers.go:580]     Audit-Id: 7e5b60df-c977-4180-af68-48033bb654bb
	I0717 19:20:32.355205 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:32.355501 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"865","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0717 19:20:32.355961 1084713 pod_ready.go:92] pod "etcd-multinode-464644" in "kube-system" namespace has status "Ready":"True"
	I0717 19:20:32.355985 1084713 pod_ready.go:81] duration metric: took 2.519329734s waiting for pod "etcd-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:20:32.356003 1084713 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:20:32.356077 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-464644
	I0717 19:20:32.356086 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:32.356094 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:32.356100 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:32.361179 1084713 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 19:20:32.361206 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:32.361214 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:32.361226 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:32 GMT
	I0717 19:20:32.361232 1084713 round_trippers.go:580]     Audit-Id: 2982a5f2-1a90-4877-b0e8-15e4717d4df6
	I0717 19:20:32.361237 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:32.361243 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:32.361248 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:32.361380 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-464644","namespace":"kube-system","uid":"dd6e14e2-0b92-42b9-b6a2-1562c2c70903","resourceVersion":"867","creationTimestamp":"2023-07-17T19:09:54Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.174:8443","kubernetes.io/config.hash":"b280034e13df00701aec7afc575fcc6c","kubernetes.io/config.mirror":"b280034e13df00701aec7afc575fcc6c","kubernetes.io/config.seen":"2023-07-17T19:09:54.339586957Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:09:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0717 19:20:32.361934 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:20:32.361952 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:32.361959 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:32.361965 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:32.367074 1084713 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 19:20:32.367111 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:32.367122 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:32.367131 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:32.367140 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:32.367148 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:32 GMT
	I0717 19:20:32.367156 1084713 round_trippers.go:580]     Audit-Id: df156c72-f2f4-4b98-8acd-c6ed62ee49f7
	I0717 19:20:32.367169 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:32.367329 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"865","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0717 19:20:32.367804 1084713 pod_ready.go:92] pod "kube-apiserver-multinode-464644" in "kube-system" namespace has status "Ready":"True"
	I0717 19:20:32.367835 1084713 pod_ready.go:81] duration metric: took 11.823872ms waiting for pod "kube-apiserver-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:20:32.367854 1084713 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:20:32.368001 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-464644
	I0717 19:20:32.368016 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:32.368027 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:32.368035 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:32.374133 1084713 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 19:20:32.374162 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:32.374170 1084713 round_trippers.go:580]     Audit-Id: bf9ac56b-1806-48b1-a16a-7e1377ea44de
	I0717 19:20:32.374176 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:32.374181 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:32.374187 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:32.374192 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:32.374197 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:32 GMT
	I0717 19:20:32.374314 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-464644","namespace":"kube-system","uid":"6b598e8b-6c96-4014-b0a2-de37f107a0e9","resourceVersion":"880","creationTimestamp":"2023-07-17T19:09:54Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"323b8f41b30f0969feab8ff61a3ecabd","kubernetes.io/config.mirror":"323b8f41b30f0969feab8ff61a3ecabd","kubernetes.io/config.seen":"2023-07-17T19:09:54.339588566Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:09:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0717 19:20:32.374914 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:20:32.374938 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:32.374949 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:32.374958 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:32.378343 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:20:32.378365 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:32.378372 1084713 round_trippers.go:580]     Audit-Id: 208b3c1b-b0bc-4af8-a7ed-54ae92e32a46
	I0717 19:20:32.378378 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:32.378383 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:32.378389 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:32.378394 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:32.378399 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:32 GMT
	I0717 19:20:32.378558 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"865","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0717 19:20:32.379007 1084713 pod_ready.go:92] pod "kube-controller-manager-multinode-464644" in "kube-system" namespace has status "Ready":"True"
	I0717 19:20:32.379037 1084713 pod_ready.go:81] duration metric: took 11.167935ms waiting for pod "kube-controller-manager-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:20:32.379051 1084713 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-56qvt" in "kube-system" namespace to be "Ready" ...
	I0717 19:20:32.401436 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-56qvt
	I0717 19:20:32.401474 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:32.401487 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:32.401500 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:32.404652 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:20:32.404689 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:32.404744 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:32 GMT
	I0717 19:20:32.404783 1084713 round_trippers.go:580]     Audit-Id: 8fbba9d8-04fc-4c06-958e-94996dd47fda
	I0717 19:20:32.404797 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:32.404809 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:32.404819 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:32.404831 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:32.404959 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-56qvt","generateName":"kube-proxy-","namespace":"kube-system","uid":"8207802f-ef88-4f7f-871c-bc528ef98b58","resourceVersion":"721","creationTimestamp":"2023-07-17T19:11:40Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cab15633-0bd8-4e4c-a88b-c03af1462254","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:11:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cab15633-0bd8-4e4c-a88b-c03af1462254\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0717 19:20:32.602055 1084713 request.go:628] Waited for 196.480426ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes/multinode-464644-m03
	I0717 19:20:32.602146 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644-m03
	I0717 19:20:32.602154 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:32.602166 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:32.602182 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:32.605305 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:20:32.605340 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:32.605352 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:32.605361 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:32 GMT
	I0717 19:20:32.605370 1084713 round_trippers.go:580]     Audit-Id: d6ad4f9a-f2f2-4c3f-a131-e7081e2b0172
	I0717 19:20:32.605377 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:32.605385 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:32.605392 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:32.605547 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644-m03","uid":"78befe00-f3c3-4f9c-86ff-aea572ef1c48","resourceVersion":"744","creationTimestamp":"2023-07-17T19:12:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:12:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3533 chars]
	I0717 19:20:32.606042 1084713 pod_ready.go:92] pod "kube-proxy-56qvt" in "kube-system" namespace has status "Ready":"True"
	I0717 19:20:32.606064 1084713 pod_ready.go:81] duration metric: took 226.999167ms waiting for pod "kube-proxy-56qvt" in "kube-system" namespace to be "Ready" ...
	I0717 19:20:32.606075 1084713 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j6ds6" in "kube-system" namespace to be "Ready" ...
	I0717 19:20:32.801523 1084713 request.go:628] Waited for 195.347203ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j6ds6
	I0717 19:20:32.801626 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j6ds6
	I0717 19:20:32.801691 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:32.801708 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:32.801719 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:32.804713 1084713 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:20:32.804746 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:32.804757 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:32.804785 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:32 GMT
	I0717 19:20:32.804794 1084713 round_trippers.go:580]     Audit-Id: 674cbe5e-7c3b-45c7-a20a-1b128ca25876
	I0717 19:20:32.804803 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:32.804816 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:32.804824 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:32.804981 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-j6ds6","generateName":"kube-proxy-","namespace":"kube-system","uid":"439bb5b7-0e46-4762-a9a7-e648a212ad93","resourceVersion":"518","creationTimestamp":"2023-07-17T19:10:52Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cab15633-0bd8-4e4c-a88b-c03af1462254","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cab15633-0bd8-4e4c-a88b-c03af1462254\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0717 19:20:33.001999 1084713 request.go:628] Waited for 196.416233ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes/multinode-464644-m02
	I0717 19:20:33.002076 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644-m02
	I0717 19:20:33.002081 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:33.002093 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:33.002101 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:33.006363 1084713 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 19:20:33.006406 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:33.006420 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:33.006429 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:33.006438 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:33.006459 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:32 GMT
	I0717 19:20:33.006468 1084713 round_trippers.go:580]     Audit-Id: e2bec351-aa46-4710-bcaf-c5ce5f74a96a
	I0717 19:20:33.006476 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:33.007326 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644-m02","uid":"7808c8a5-eed0-4632-bd7c-dc2a2a06fa78","resourceVersion":"743","creationTimestamp":"2023-07-17T19:10:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3683 chars]
	I0717 19:20:33.007711 1084713 pod_ready.go:92] pod "kube-proxy-j6ds6" in "kube-system" namespace has status "Ready":"True"
	I0717 19:20:33.007733 1084713 pod_ready.go:81] duration metric: took 401.649639ms waiting for pod "kube-proxy-j6ds6" in "kube-system" namespace to be "Ready" ...
	I0717 19:20:33.007750 1084713 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qwsn5" in "kube-system" namespace to be "Ready" ...
	I0717 19:20:33.202293 1084713 request.go:628] Waited for 194.461014ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qwsn5
	I0717 19:20:33.202378 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qwsn5
	I0717 19:20:33.202384 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:33.202392 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:33.202398 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:33.205994 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:20:33.206036 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:33.206047 1084713 round_trippers.go:580]     Audit-Id: 42983432-48e4-4115-a400-73f7afe5fb60
	I0717 19:20:33.206053 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:33.206058 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:33.206064 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:33.206069 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:33.206074 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:33 GMT
	I0717 19:20:33.206176 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qwsn5","generateName":"kube-proxy-","namespace":"kube-system","uid":"50e3f5e0-00d9-4412-b4de-649bc29733e9","resourceVersion":"776","creationTimestamp":"2023-07-17T19:10:08Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cab15633-0bd8-4e4c-a88b-c03af1462254","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cab15633-0bd8-4e4c-a88b-c03af1462254\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0717 19:20:33.402227 1084713 request.go:628] Waited for 195.460231ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:20:33.402322 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:20:33.402331 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:33.402343 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:33.402354 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:33.406510 1084713 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 19:20:33.406539 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:33.406547 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:33.406553 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:33 GMT
	I0717 19:20:33.406558 1084713 round_trippers.go:580]     Audit-Id: 4205ee6d-1cb8-442d-8949-67d6ce003b5a
	I0717 19:20:33.406563 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:33.406569 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:33.406578 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:33.406697 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"865","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0717 19:20:33.407165 1084713 pod_ready.go:92] pod "kube-proxy-qwsn5" in "kube-system" namespace has status "Ready":"True"
	I0717 19:20:33.407189 1084713 pod_ready.go:81] duration metric: took 399.431173ms waiting for pod "kube-proxy-qwsn5" in "kube-system" namespace to be "Ready" ...
	I0717 19:20:33.407200 1084713 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:20:33.601795 1084713 request.go:628] Waited for 194.463099ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-464644
	I0717 19:20:33.601871 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-464644
	I0717 19:20:33.601876 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:33.601884 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:33.601891 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:33.605034 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:20:33.605070 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:33.605080 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:33.605087 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:33.605093 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:33.605099 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:33 GMT
	I0717 19:20:33.605105 1084713 round_trippers.go:580]     Audit-Id: 5729e646-5aea-4f6b-81ae-9337c7ca47cc
	I0717 19:20:33.605110 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:33.605222 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-464644","namespace":"kube-system","uid":"04e5660d-abb0-432a-861e-c5c242edfb98","resourceVersion":"894","creationTimestamp":"2023-07-17T19:09:54Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6435a6b37c43f83175753c4199c85407","kubernetes.io/config.mirror":"6435a6b37c43f83175753c4199c85407","kubernetes.io/config.seen":"2023-07-17T19:09:54.339590320Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:09:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0717 19:20:33.802213 1084713 request.go:628] Waited for 196.461592ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:20:33.802278 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:20:33.802284 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:33.802292 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:33.802297 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:33.806977 1084713 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 19:20:33.807016 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:33.807029 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:33.807047 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:33.807056 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:33.807065 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:33.807078 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:33 GMT
	I0717 19:20:33.807089 1084713 round_trippers.go:580]     Audit-Id: 9215ba0b-6cef-4293-a1d9-096793b8d45b
	I0717 19:20:33.807270 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"865","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0717 19:20:33.807741 1084713 pod_ready.go:92] pod "kube-scheduler-multinode-464644" in "kube-system" namespace has status "Ready":"True"
	I0717 19:20:33.807767 1084713 pod_ready.go:81] duration metric: took 400.558938ms waiting for pod "kube-scheduler-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:20:33.807783 1084713 pod_ready.go:38] duration metric: took 7.996342081s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
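
	The pod_ready.go entries above amount to a simple poll: fetch each control-plane pod, inspect its Ready condition, and retry until it reports True or the 6m0s budget expires. A minimal client-go sketch of that loop is shown below; the kubeconfig path, the 500ms poll interval, and the hard-coded pod name etcd-multinode-464644 are illustrative assumptions, not minikube's actual implementation.

	// Minimal sketch of the readiness poll seen in the log above (assumptions noted in the lead-in).
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the kubeconfig minikube writes for the profile (path is an assumption).
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		// Poll the etcd static pod until its Ready condition is True, or give up after 6 minutes.
		err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-multinode-464644", metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
		if err != nil {
			panic(err)
		}
		fmt.Println("pod etcd-multinode-464644 is Ready")
	}
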
	I0717 19:20:33.807805 1084713 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:20:33.807872 1084713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:20:33.822865 1084713 command_runner.go:130] > 1071
	I0717 19:20:33.822949 1084713 api_server.go:72] duration metric: took 9.192273439s to wait for apiserver process to appear ...
	I0717 19:20:33.822963 1084713 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:20:33.822986 1084713 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8443/healthz ...
	I0717 19:20:33.829241 1084713 api_server.go:279] https://192.168.39.174:8443/healthz returned 200:
	ok
	I0717 19:20:33.829333 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/version
	I0717 19:20:33.829341 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:33.829349 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:33.829355 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:33.830742 1084713 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 19:20:33.830776 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:33.830788 1084713 round_trippers.go:580]     Audit-Id: e7bf6a8e-3cab-40f1-bfe3-041763fa52df
	I0717 19:20:33.830798 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:33.830811 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:33.830820 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:33.830828 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:33.830835 1084713 round_trippers.go:580]     Content-Length: 263
	I0717 19:20:33.830840 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:33 GMT
	I0717 19:20:33.830898 1084713 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.3",
	  "gitCommit": "25b4e43193bcda6c7328a6d147b1fb73a33f1598",
	  "gitTreeState": "clean",
	  "buildDate": "2023-06-14T09:47:40Z",
	  "goVersion": "go1.20.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0717 19:20:33.830961 1084713 api_server.go:141] control plane version: v1.27.3
	I0717 19:20:33.830983 1084713 api_server.go:131] duration metric: took 8.012345ms to wait for apiserver health ...
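
	The healthz probe and version query logged above can be approximated with client-go's discovery client. The fragment below is only a sketch: it assumes the *kubernetes.Clientset named client from the previous sketch and shows the two calls, nothing more.

	// Probe /healthz through the authenticated REST client (expected body: "ok").
	raw, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", raw)

	// Ask the API server for its version, as the /version request above does.
	info, err := client.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("control plane version: %s\n", info.GitVersion) // e.g. v1.27.3
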
	I0717 19:20:33.830993 1084713 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:20:34.001362 1084713 request.go:628] Waited for 170.277377ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods
	I0717 19:20:34.001459 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods
	I0717 19:20:34.001466 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:34.001477 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:34.001487 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:34.007052 1084713 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 19:20:34.007087 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:34.007099 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:34.007108 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:33 GMT
	I0717 19:20:34.007117 1084713 round_trippers.go:580]     Audit-Id: fed2672e-fa5e-4008-b8e6-22b3c1d93993
	I0717 19:20:34.007124 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:34.007131 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:34.007138 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:34.009890 1084713 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"894"},"items":[{"metadata":{"name":"coredns-5d78c9869d-wqj4s","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991","resourceVersion":"873","creationTimestamp":"2023-07-17T19:10:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"8369cc45-03bf-4784-a3b1-d46615923fd9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8369cc45-03bf-4784-a3b1-d46615923fd9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81881 chars]
	I0717 19:20:34.013482 1084713 system_pods.go:59] 12 kube-system pods found
	I0717 19:20:34.013516 1084713 system_pods.go:61] "coredns-5d78c9869d-wqj4s" [a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991] Running
	I0717 19:20:34.013521 1084713 system_pods.go:61] "etcd-multinode-464644" [b672d395-d32d-4198-b486-d9cff48d8b9a] Running
	I0717 19:20:34.013525 1084713 system_pods.go:61] "kindnet-2tp5c" [4e4881b0-4a20-4588-a87b-d2ba9c9b6939] Running
	I0717 19:20:34.013532 1084713 system_pods.go:61] "kindnet-t77xh" [94cb9b0b-58b4-45cc-b6f1-1ca459aed7bc] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0717 19:20:34.013540 1084713 system_pods.go:61] "kindnet-znndf" [94e12556-bc64-4780-b11d-5f8009f953c0] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0717 19:20:34.013547 1084713 system_pods.go:61] "kube-apiserver-multinode-464644" [dd6e14e2-0b92-42b9-b6a2-1562c2c70903] Running
	I0717 19:20:34.013552 1084713 system_pods.go:61] "kube-controller-manager-multinode-464644" [6b598e8b-6c96-4014-b0a2-de37f107a0e9] Running
	I0717 19:20:34.013568 1084713 system_pods.go:61] "kube-proxy-56qvt" [8207802f-ef88-4f7f-871c-bc528ef98b58] Running
	I0717 19:20:34.013576 1084713 system_pods.go:61] "kube-proxy-j6ds6" [439bb5b7-0e46-4762-a9a7-e648a212ad93] Running
	I0717 19:20:34.013580 1084713 system_pods.go:61] "kube-proxy-qwsn5" [50e3f5e0-00d9-4412-b4de-649bc29733e9] Running
	I0717 19:20:34.013585 1084713 system_pods.go:61] "kube-scheduler-multinode-464644" [04e5660d-abb0-432a-861e-c5c242edfb98] Running
	I0717 19:20:34.013589 1084713 system_pods.go:61] "storage-provisioner" [bd46cf29-49d3-4c0a-908e-a323a525d8d5] Running
	I0717 19:20:34.013596 1084713 system_pods.go:74] duration metric: took 182.595983ms to wait for pod list to return data ...
	I0717 19:20:34.013606 1084713 default_sa.go:34] waiting for default service account to be created ...
	I0717 19:20:34.202130 1084713 request.go:628] Waited for 188.424114ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/default/serviceaccounts
	I0717 19:20:34.202210 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/default/serviceaccounts
	I0717 19:20:34.202215 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:34.202224 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:34.202230 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:34.205487 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:20:34.205527 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:34.205539 1084713 round_trippers.go:580]     Content-Length: 261
	I0717 19:20:34.205548 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:34 GMT
	I0717 19:20:34.205574 1084713 round_trippers.go:580]     Audit-Id: 33b265e4-0509-4b4c-bcde-c4261e0b73cf
	I0717 19:20:34.205584 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:34.205593 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:34.205609 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:34.205622 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:34.205651 1084713 request.go:1188] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"894"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"c937d5c3-8099-4596-ba93-f29feec4671e","resourceVersion":"341","creationTimestamp":"2023-07-17T19:10:07Z"}}]}
	I0717 19:20:34.205867 1084713 default_sa.go:45] found service account: "default"
	I0717 19:20:34.205882 1084713 default_sa.go:55] duration metric: took 192.270495ms for default service account to be created ...
	I0717 19:20:34.205892 1084713 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 19:20:34.401305 1084713 request.go:628] Waited for 195.324ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods
	I0717 19:20:34.401367 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods
	I0717 19:20:34.401372 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:34.401380 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:34.401387 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:34.406210 1084713 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 19:20:34.406237 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:34.406245 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:34.406251 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:34.406256 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:34 GMT
	I0717 19:20:34.406261 1084713 round_trippers.go:580]     Audit-Id: b73dc8ea-f324-46f5-9120-421bb1006873
	I0717 19:20:34.406267 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:34.406272 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:34.407121 1084713 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"894"},"items":[{"metadata":{"name":"coredns-5d78c9869d-wqj4s","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991","resourceVersion":"873","creationTimestamp":"2023-07-17T19:10:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"8369cc45-03bf-4784-a3b1-d46615923fd9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8369cc45-03bf-4784-a3b1-d46615923fd9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81881 chars]
	I0717 19:20:34.410068 1084713 system_pods.go:86] 12 kube-system pods found
	I0717 19:20:34.410098 1084713 system_pods.go:89] "coredns-5d78c9869d-wqj4s" [a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991] Running
	I0717 19:20:34.410103 1084713 system_pods.go:89] "etcd-multinode-464644" [b672d395-d32d-4198-b486-d9cff48d8b9a] Running
	I0717 19:20:34.410108 1084713 system_pods.go:89] "kindnet-2tp5c" [4e4881b0-4a20-4588-a87b-d2ba9c9b6939] Running
	I0717 19:20:34.410116 1084713 system_pods.go:89] "kindnet-t77xh" [94cb9b0b-58b4-45cc-b6f1-1ca459aed7bc] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0717 19:20:34.410124 1084713 system_pods.go:89] "kindnet-znndf" [94e12556-bc64-4780-b11d-5f8009f953c0] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0717 19:20:34.410129 1084713 system_pods.go:89] "kube-apiserver-multinode-464644" [dd6e14e2-0b92-42b9-b6a2-1562c2c70903] Running
	I0717 19:20:34.410135 1084713 system_pods.go:89] "kube-controller-manager-multinode-464644" [6b598e8b-6c96-4014-b0a2-de37f107a0e9] Running
	I0717 19:20:34.410139 1084713 system_pods.go:89] "kube-proxy-56qvt" [8207802f-ef88-4f7f-871c-bc528ef98b58] Running
	I0717 19:20:34.410143 1084713 system_pods.go:89] "kube-proxy-j6ds6" [439bb5b7-0e46-4762-a9a7-e648a212ad93] Running
	I0717 19:20:34.410147 1084713 system_pods.go:89] "kube-proxy-qwsn5" [50e3f5e0-00d9-4412-b4de-649bc29733e9] Running
	I0717 19:20:34.410151 1084713 system_pods.go:89] "kube-scheduler-multinode-464644" [04e5660d-abb0-432a-861e-c5c242edfb98] Running
	I0717 19:20:34.410158 1084713 system_pods.go:89] "storage-provisioner" [bd46cf29-49d3-4c0a-908e-a323a525d8d5] Running
	I0717 19:20:34.410164 1084713 system_pods.go:126] duration metric: took 204.268331ms to wait for k8s-apps to be running ...
	I0717 19:20:34.410176 1084713 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 19:20:34.410224 1084713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:20:34.426192 1084713 system_svc.go:56] duration metric: took 15.999429ms WaitForService to wait for kubelet.
	I0717 19:20:34.426231 1084713 kubeadm.go:581] duration metric: took 9.795571503s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 19:20:34.426254 1084713 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:20:34.601729 1084713 request.go:628] Waited for 175.384742ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes
	I0717 19:20:34.601828 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes
	I0717 19:20:34.601835 1084713 round_trippers.go:469] Request Headers:
	I0717 19:20:34.601846 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:20:34.601858 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:20:34.606418 1084713 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 19:20:34.606455 1084713 round_trippers.go:577] Response Headers:
	I0717 19:20:34.606463 1084713 round_trippers.go:580]     Audit-Id: 2c40a826-1437-4056-ab6e-79a8946f2fbf
	I0717 19:20:34.606469 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:20:34.606474 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:20:34.606480 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:20:34.606486 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:20:34.606491 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:20:34 GMT
	I0717 19:20:34.606698 1084713 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"894"},"items":[{"metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"865","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 15075 chars]
	I0717 19:20:34.607335 1084713 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 19:20:34.607356 1084713 node_conditions.go:123] node cpu capacity is 2
	I0717 19:20:34.607366 1084713 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 19:20:34.607370 1084713 node_conditions.go:123] node cpu capacity is 2
	I0717 19:20:34.607373 1084713 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 19:20:34.607377 1084713 node_conditions.go:123] node cpu capacity is 2
	I0717 19:20:34.607382 1084713 node_conditions.go:105] duration metric: took 181.122317ms to run NodePressure ...
	I0717 19:20:34.607397 1084713 start.go:228] waiting for startup goroutines ...
	I0717 19:20:34.607408 1084713 start.go:233] waiting for cluster config update ...
	I0717 19:20:34.607418 1084713 start.go:242] writing updated cluster config ...
	I0717 19:20:34.607963 1084713 config.go:182] Loaded profile config "multinode-464644": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:20:34.608061 1084713 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/config.json ...
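(Editor's note) The wait loop above checks, in order, that every kube-system pod reports Running, that the default service account exists, that the kubelet service is active, and that no node is under resource pressure, before the tool moves on to the worker node. A minimal client-go sketch of just the pod-readiness part of that check, assuming only a reachable kubeconfig; the helper name, timeout, and polling interval are illustrative, not minikube's actual code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForKubeSystemPods polls the API server until every pod in kube-system
// reports phase Running, similar in spirit to the "waiting for k8s-apps to be
// running" step in the log above.
func waitForKubeSystemPods(ctx context.Context, cs kubernetes.Interface, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		allRunning := true
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				allRunning = false
				break
			}
		}
		if allRunning {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for kube-system pods")
		}
		time.Sleep(2 * time.Second)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForKubeSystemPods(context.Background(), cs, 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("all kube-system pods are Running")
}

Note that, as in the log, a pod can be Running while its containers are still not Ready (the kindnet pods above); this sketch only mirrors the phase check, not the separate readiness condition.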
	I0717 19:20:34.612872 1084713 out.go:177] * Starting worker node multinode-464644-m02 in cluster multinode-464644
	I0717 19:20:34.614847 1084713 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 19:20:34.614889 1084713 cache.go:57] Caching tarball of preloaded images
	I0717 19:20:34.615029 1084713 preload.go:174] Found /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 19:20:34.615042 1084713 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 19:20:34.615184 1084713 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/config.json ...
	I0717 19:20:34.615385 1084713 start.go:365] acquiring machines lock for multinode-464644-m02: {Name:mk1fbc5d19c9a63796b02883911e39f1c406ff89 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:20:34.615465 1084713 start.go:369] acquired machines lock for "multinode-464644-m02" in 49.247µs
	I0717 19:20:34.615530 1084713 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:20:34.615542 1084713 fix.go:54] fixHost starting: m02
	I0717 19:20:34.615941 1084713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:20:34.615969 1084713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:20:34.631418 1084713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37667
	I0717 19:20:34.631916 1084713 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:20:34.632466 1084713 main.go:141] libmachine: Using API Version  1
	I0717 19:20:34.632491 1084713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:20:34.632905 1084713 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:20:34.633099 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .DriverName
	I0717 19:20:34.633304 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetState
	I0717 19:20:34.635093 1084713 fix.go:102] recreateIfNeeded on multinode-464644-m02: state=Running err=<nil>
	W0717 19:20:34.635113 1084713 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 19:20:34.637686 1084713 out.go:177] * Updating the running kvm2 "multinode-464644-m02" VM ...
	I0717 19:20:34.639524 1084713 machine.go:88] provisioning docker machine ...
	I0717 19:20:34.639554 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .DriverName
	I0717 19:20:34.639898 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetMachineName
	I0717 19:20:34.640130 1084713 buildroot.go:166] provisioning hostname "multinode-464644-m02"
	I0717 19:20:34.640159 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetMachineName
	I0717 19:20:34.640346 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHHostname
	I0717 19:20:34.642937 1084713 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:20:34.643473 1084713 main.go:141] libmachine: (multinode-464644-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:46:84", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:10:34 +0000 UTC Type:0 Mac:52:54:00:2d:46:84 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:multinode-464644-m02 Clientid:01:52:54:00:2d:46:84}
	I0717 19:20:34.643511 1084713 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined IP address 192.168.39.49 and MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:20:34.643666 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHPort
	I0717 19:20:34.643893 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHKeyPath
	I0717 19:20:34.644107 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHKeyPath
	I0717 19:20:34.644271 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHUsername
	I0717 19:20:34.644448 1084713 main.go:141] libmachine: Using SSH client type: native
	I0717 19:20:34.644852 1084713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I0717 19:20:34.644866 1084713 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-464644-m02 && echo "multinode-464644-m02" | sudo tee /etc/hostname
	I0717 19:20:34.783609 1084713 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-464644-m02
	
	I0717 19:20:34.783646 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHHostname
	I0717 19:20:34.787123 1084713 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:20:34.787548 1084713 main.go:141] libmachine: (multinode-464644-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:46:84", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:10:34 +0000 UTC Type:0 Mac:52:54:00:2d:46:84 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:multinode-464644-m02 Clientid:01:52:54:00:2d:46:84}
	I0717 19:20:34.787580 1084713 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined IP address 192.168.39.49 and MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:20:34.787940 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHPort
	I0717 19:20:34.788252 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHKeyPath
	I0717 19:20:34.788547 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHKeyPath
	I0717 19:20:34.788779 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHUsername
	I0717 19:20:34.789018 1084713 main.go:141] libmachine: Using SSH client type: native
	I0717 19:20:34.789666 1084713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I0717 19:20:34.789693 1084713 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-464644-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-464644-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-464644-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:20:34.907161 1084713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:20:34.907200 1084713 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16890-1061725/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-1061725/.minikube}
	I0717 19:20:34.907226 1084713 buildroot.go:174] setting up certificates
	I0717 19:20:34.907237 1084713 provision.go:83] configureAuth start
	I0717 19:20:34.907250 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetMachineName
	I0717 19:20:34.907611 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetIP
	I0717 19:20:34.910781 1084713 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:20:34.911183 1084713 main.go:141] libmachine: (multinode-464644-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:46:84", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:10:34 +0000 UTC Type:0 Mac:52:54:00:2d:46:84 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:multinode-464644-m02 Clientid:01:52:54:00:2d:46:84}
	I0717 19:20:34.911222 1084713 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined IP address 192.168.39.49 and MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:20:34.911533 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHHostname
	I0717 19:20:34.914051 1084713 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:20:34.914561 1084713 main.go:141] libmachine: (multinode-464644-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:46:84", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:10:34 +0000 UTC Type:0 Mac:52:54:00:2d:46:84 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:multinode-464644-m02 Clientid:01:52:54:00:2d:46:84}
	I0717 19:20:34.914590 1084713 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined IP address 192.168.39.49 and MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:20:34.914778 1084713 provision.go:138] copyHostCerts
	I0717 19:20:34.914822 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem
	I0717 19:20:34.914893 1084713 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem, removing ...
	I0717 19:20:34.914913 1084713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem
	I0717 19:20:34.915007 1084713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem (1123 bytes)
	I0717 19:20:34.915106 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem
	I0717 19:20:34.915136 1084713 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem, removing ...
	I0717 19:20:34.915146 1084713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem
	I0717 19:20:34.915185 1084713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem (1675 bytes)
	I0717 19:20:34.915252 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem
	I0717 19:20:34.915277 1084713 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem, removing ...
	I0717 19:20:34.915286 1084713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem
	I0717 19:20:34.915321 1084713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem (1082 bytes)
	I0717 19:20:34.915420 1084713 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem org=jenkins.multinode-464644-m02 san=[192.168.39.49 192.168.39.49 localhost 127.0.0.1 minikube multinode-464644-m02]
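(Editor's note) The server certificate above is issued for the SAN set logged by provision.go (192.168.39.49, localhost, 127.0.0.1, minikube, multinode-464644-m02) and is signed with minikube's cluster CA. The sketch below only illustrates how such a SAN list is attached to a certificate with crypto/x509; it self-signs for brevity, so the signing path deliberately differs from the CA-signed flow in the log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// A minimal, self-signed server certificate carrying the DNS and IP SANs seen
// in the log above. Illustrative only: minikube signs its server cert with the
// cluster CA rather than self-signing.
func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-464644-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "multinode-464644-m02"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.39.49"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	out, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}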
	I0717 19:20:34.970877 1084713 provision.go:172] copyRemoteCerts
	I0717 19:20:34.970942 1084713 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:20:34.970974 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHHostname
	I0717 19:20:34.974499 1084713 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:20:34.974998 1084713 main.go:141] libmachine: (multinode-464644-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:46:84", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:10:34 +0000 UTC Type:0 Mac:52:54:00:2d:46:84 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:multinode-464644-m02 Clientid:01:52:54:00:2d:46:84}
	I0717 19:20:34.975041 1084713 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined IP address 192.168.39.49 and MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:20:34.975306 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHPort
	I0717 19:20:34.975534 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHKeyPath
	I0717 19:20:34.975748 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHUsername
	I0717 19:20:34.975960 1084713 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644-m02/id_rsa Username:docker}
	I0717 19:20:35.064160 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 19:20:35.064264 1084713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 19:20:35.090078 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 19:20:35.090188 1084713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0717 19:20:35.116183 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 19:20:35.116275 1084713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:20:35.142692 1084713 provision.go:86] duration metric: configureAuth took 235.436679ms
	I0717 19:20:35.142732 1084713 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:20:35.143041 1084713 config.go:182] Loaded profile config "multinode-464644": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:20:35.143133 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHHostname
	I0717 19:20:35.146146 1084713 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:20:35.146592 1084713 main.go:141] libmachine: (multinode-464644-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:46:84", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:10:34 +0000 UTC Type:0 Mac:52:54:00:2d:46:84 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:multinode-464644-m02 Clientid:01:52:54:00:2d:46:84}
	I0717 19:20:35.146629 1084713 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined IP address 192.168.39.49 and MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:20:35.146832 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHPort
	I0717 19:20:35.147064 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHKeyPath
	I0717 19:20:35.147247 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHKeyPath
	I0717 19:20:35.147408 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHUsername
	I0717 19:20:35.147575 1084713 main.go:141] libmachine: Using SSH client type: native
	I0717 19:20:35.148106 1084713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I0717 19:20:35.148124 1084713 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:22:05.677491 1084713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:22:05.677532 1084713 machine.go:91] provisioned docker machine in 1m31.037988928s
	I0717 19:22:05.677548 1084713 start.go:300] post-start starting for "multinode-464644-m02" (driver="kvm2")
	I0717 19:22:05.677583 1084713 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:22:05.677662 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .DriverName
	I0717 19:22:05.678079 1084713 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:22:05.678128 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHHostname
	I0717 19:22:05.681896 1084713 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:22:05.682431 1084713 main.go:141] libmachine: (multinode-464644-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:46:84", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:10:34 +0000 UTC Type:0 Mac:52:54:00:2d:46:84 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:multinode-464644-m02 Clientid:01:52:54:00:2d:46:84}
	I0717 19:22:05.682463 1084713 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined IP address 192.168.39.49 and MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:22:05.682703 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHPort
	I0717 19:22:05.682928 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHKeyPath
	I0717 19:22:05.683162 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHUsername
	I0717 19:22:05.683357 1084713 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644-m02/id_rsa Username:docker}
	I0717 19:22:05.777750 1084713 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:22:05.782081 1084713 command_runner.go:130] > NAME=Buildroot
	I0717 19:22:05.782109 1084713 command_runner.go:130] > VERSION=2021.02.12-1-gf5d52c7-dirty
	I0717 19:22:05.782113 1084713 command_runner.go:130] > ID=buildroot
	I0717 19:22:05.782119 1084713 command_runner.go:130] > VERSION_ID=2021.02.12
	I0717 19:22:05.782124 1084713 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0717 19:22:05.782227 1084713 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 19:22:05.782259 1084713 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/addons for local assets ...
	I0717 19:22:05.782346 1084713 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/files for local assets ...
	I0717 19:22:05.782464 1084713 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> 10689542.pem in /etc/ssl/certs
	I0717 19:22:05.782481 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> /etc/ssl/certs/10689542.pem
	I0717 19:22:05.782691 1084713 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:22:05.792988 1084713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:22:05.816838 1084713 start.go:303] post-start completed in 139.248803ms
	I0717 19:22:05.816877 1084713 fix.go:56] fixHost completed within 1m31.201335658s
	I0717 19:22:05.816906 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHHostname
	I0717 19:22:05.820100 1084713 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:22:05.820506 1084713 main.go:141] libmachine: (multinode-464644-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:46:84", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:10:34 +0000 UTC Type:0 Mac:52:54:00:2d:46:84 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:multinode-464644-m02 Clientid:01:52:54:00:2d:46:84}
	I0717 19:22:05.820556 1084713 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined IP address 192.168.39.49 and MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:22:05.820790 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHPort
	I0717 19:22:05.821072 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHKeyPath
	I0717 19:22:05.821228 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHKeyPath
	I0717 19:22:05.821370 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHUsername
	I0717 19:22:05.821541 1084713 main.go:141] libmachine: Using SSH client type: native
	I0717 19:22:05.822008 1084713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I0717 19:22:05.822023 1084713 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:22:05.939044 1084713 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689621725.930543050
	
	I0717 19:22:05.939075 1084713 fix.go:206] guest clock: 1689621725.930543050
	I0717 19:22:05.939091 1084713 fix.go:219] Guest: 2023-07-17 19:22:05.93054305 +0000 UTC Remote: 2023-07-17 19:22:05.816883041 +0000 UTC m=+448.800740720 (delta=113.660009ms)
	I0717 19:22:05.939117 1084713 fix.go:190] guest clock delta is within tolerance: 113.660009ms
	I0717 19:22:05.939124 1084713 start.go:83] releasing machines lock for "multinode-464644-m02", held for 1m31.323644367s
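(Editor's note) The guest-clock check above runs `date +%s.%N` on the VM over SSH and compares the result against the host's wall clock, accepting the machine when the delta stays inside a tolerance. A standalone sketch of that comparison, fed with the exact timestamps from the log; the parsing helper and tolerance value are assumptions, not minikube's fix.go implementation:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns the signed
// difference from the supplied local time.
func clockDelta(guestOutput string, local time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOutput), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad or truncate the fractional part to exactly nanoseconds.
		frac := (parts[1] + "000000000")[:9]
		nsec, err = strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return 0, err
		}
	}
	guest := time.Unix(sec, nsec)
	return guest.Sub(local), nil
}

func main() {
	// Guest and remote timestamps taken verbatim from the log lines above.
	delta, err := clockDelta("1689621725.930543050", time.Unix(1689621725, 816883041))
	if err != nil {
		panic(err)
	}
	const tolerance = time.Second // illustrative tolerance
	within := delta < tolerance && delta > -tolerance
	fmt.Printf("guest clock delta: %v (within %v: %t)\n", delta, tolerance, within)
}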
	I0717 19:22:05.939159 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .DriverName
	I0717 19:22:05.939544 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetIP
	I0717 19:22:05.942421 1084713 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:22:05.942802 1084713 main.go:141] libmachine: (multinode-464644-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:46:84", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:10:34 +0000 UTC Type:0 Mac:52:54:00:2d:46:84 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:multinode-464644-m02 Clientid:01:52:54:00:2d:46:84}
	I0717 19:22:05.942841 1084713 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined IP address 192.168.39.49 and MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:22:05.945447 1084713 out.go:177] * Found network options:
	I0717 19:22:05.947178 1084713 out.go:177]   - NO_PROXY=192.168.39.174
	W0717 19:22:05.949035 1084713 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 19:22:05.949079 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .DriverName
	I0717 19:22:05.950002 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .DriverName
	I0717 19:22:05.950290 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .DriverName
	I0717 19:22:05.950379 1084713 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:22:05.950458 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHHostname
	W0717 19:22:05.950537 1084713 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 19:22:05.950684 1084713 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:22:05.950716 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHHostname
	I0717 19:22:05.953616 1084713 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:22:05.953843 1084713 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:22:05.954028 1084713 main.go:141] libmachine: (multinode-464644-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:46:84", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:10:34 +0000 UTC Type:0 Mac:52:54:00:2d:46:84 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:multinode-464644-m02 Clientid:01:52:54:00:2d:46:84}
	I0717 19:22:05.954064 1084713 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined IP address 192.168.39.49 and MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:22:05.954188 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHPort
	I0717 19:22:05.954375 1084713 main.go:141] libmachine: (multinode-464644-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:46:84", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:10:34 +0000 UTC Type:0 Mac:52:54:00:2d:46:84 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:multinode-464644-m02 Clientid:01:52:54:00:2d:46:84}
	I0717 19:22:05.954401 1084713 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined IP address 192.168.39.49 and MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:22:05.954445 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHKeyPath
	I0717 19:22:05.954543 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHPort
	I0717 19:22:05.954613 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHUsername
	I0717 19:22:05.954694 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHKeyPath
	I0717 19:22:05.954754 1084713 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644-m02/id_rsa Username:docker}
	I0717 19:22:05.954814 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHUsername
	I0717 19:22:05.954954 1084713 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644-m02/id_rsa Username:docker}
	I0717 19:22:06.195299 1084713 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0717 19:22:06.195385 1084713 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 19:22:06.201960 1084713 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0717 19:22:06.202015 1084713 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:22:06.202090 1084713 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:22:06.212099 1084713 cni.go:265] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 19:22:06.212132 1084713 start.go:469] detecting cgroup driver to use...
	I0717 19:22:06.212206 1084713 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:22:06.228379 1084713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:22:06.242578 1084713 docker.go:196] disabling cri-docker service (if available) ...
	I0717 19:22:06.242655 1084713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:22:06.258103 1084713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:22:06.272780 1084713 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:22:06.423288 1084713 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:22:06.564025 1084713 docker.go:212] disabling docker service ...
	I0717 19:22:06.564114 1084713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:22:06.579449 1084713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:22:06.593521 1084713 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:22:06.712649 1084713 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:22:06.832555 1084713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:22:06.845708 1084713 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:22:06.864635 1084713 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0717 19:22:06.864716 1084713 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:22:06.864778 1084713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:22:06.875117 1084713 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:22:06.875189 1084713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:22:06.886894 1084713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:22:06.897426 1084713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:22:06.908048 1084713 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:22:06.918649 1084713 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:22:06.928237 1084713 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0717 19:22:06.928352 1084713 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:22:06.939235 1084713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:22:07.063155 1084713 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:22:07.309821 1084713 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:22:07.309917 1084713 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:22:07.315641 1084713 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0717 19:22:07.315678 1084713 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0717 19:22:07.315688 1084713 command_runner.go:130] > Device: 16h/22d	Inode: 1202        Links: 1
	I0717 19:22:07.315700 1084713 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 19:22:07.315707 1084713 command_runner.go:130] > Access: 2023-07-17 19:22:07.223647507 +0000
	I0717 19:22:07.315718 1084713 command_runner.go:130] > Modify: 2023-07-17 19:22:07.223647507 +0000
	I0717 19:22:07.315726 1084713 command_runner.go:130] > Change: 2023-07-17 19:22:07.223647507 +0000
	I0717 19:22:07.315734 1084713 command_runner.go:130] >  Birth: -
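(Editor's note) After restarting CRI-O, the code above waits up to 60s for /var/run/crio/crio.sock to appear (the `stat` output just logged) before probing crictl. A self-contained sketch of that kind of socket wait; the helper name and polling interval are illustrative:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the given path exists and is a unix socket, or the
// timeout expires — the same idea as the "Will wait 60s for socket path" step.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}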
	I0717 19:22:07.315755 1084713 start.go:537] Will wait 60s for crictl version
	I0717 19:22:07.315803 1084713 ssh_runner.go:195] Run: which crictl
	I0717 19:22:07.319579 1084713 command_runner.go:130] > /usr/bin/crictl
	I0717 19:22:07.319866 1084713 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:22:07.357761 1084713 command_runner.go:130] > Version:  0.1.0
	I0717 19:22:07.357793 1084713 command_runner.go:130] > RuntimeName:  cri-o
	I0717 19:22:07.357806 1084713 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0717 19:22:07.357815 1084713 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0717 19:22:07.357835 1084713 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 19:22:07.357897 1084713 ssh_runner.go:195] Run: crio --version
	I0717 19:22:07.414599 1084713 command_runner.go:130] > crio version 1.24.1
	I0717 19:22:07.414624 1084713 command_runner.go:130] > Version:          1.24.1
	I0717 19:22:07.414631 1084713 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0717 19:22:07.414635 1084713 command_runner.go:130] > GitTreeState:     dirty
	I0717 19:22:07.414648 1084713 command_runner.go:130] > BuildDate:        2023-07-15T02:24:22Z
	I0717 19:22:07.414654 1084713 command_runner.go:130] > GoVersion:        go1.19.9
	I0717 19:22:07.414658 1084713 command_runner.go:130] > Compiler:         gc
	I0717 19:22:07.414663 1084713 command_runner.go:130] > Platform:         linux/amd64
	I0717 19:22:07.414670 1084713 command_runner.go:130] > Linkmode:         dynamic
	I0717 19:22:07.414677 1084713 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0717 19:22:07.414682 1084713 command_runner.go:130] > SeccompEnabled:   true
	I0717 19:22:07.414686 1084713 command_runner.go:130] > AppArmorEnabled:  false
	I0717 19:22:07.416157 1084713 ssh_runner.go:195] Run: crio --version
	I0717 19:22:07.472271 1084713 command_runner.go:130] > crio version 1.24.1
	I0717 19:22:07.472300 1084713 command_runner.go:130] > Version:          1.24.1
	I0717 19:22:07.472311 1084713 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0717 19:22:07.472316 1084713 command_runner.go:130] > GitTreeState:     dirty
	I0717 19:22:07.472325 1084713 command_runner.go:130] > BuildDate:        2023-07-15T02:24:22Z
	I0717 19:22:07.472332 1084713 command_runner.go:130] > GoVersion:        go1.19.9
	I0717 19:22:07.472348 1084713 command_runner.go:130] > Compiler:         gc
	I0717 19:22:07.472355 1084713 command_runner.go:130] > Platform:         linux/amd64
	I0717 19:22:07.472363 1084713 command_runner.go:130] > Linkmode:         dynamic
	I0717 19:22:07.472379 1084713 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0717 19:22:07.472387 1084713 command_runner.go:130] > SeccompEnabled:   true
	I0717 19:22:07.472395 1084713 command_runner.go:130] > AppArmorEnabled:  false
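(Editor's note) The version probes above run `crictl version` and `crio --version` on the node over SSH and parse the output before continuing. A local-only sketch of the same probes using os/exec; running them locally instead of through minikube's ssh_runner is an assumption for illustration:

package main

import (
	"fmt"
	"os/exec"
)

// Run the two runtime version probes seen in the log and print whatever they
// report. Requires crictl and crio on the local PATH.
func main() {
	for _, cmd := range [][]string{
		{"crictl", "version"},
		{"crio", "--version"},
	} {
		out, err := exec.Command(cmd[0], cmd[1:]...).CombinedOutput()
		if err != nil {
			fmt.Printf("%s failed: %v\n", cmd[0], err)
			continue
		}
		fmt.Printf("%s:\n%s\n", cmd[0], out)
	}
}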
	I0717 19:22:07.476644 1084713 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0717 19:22:07.478781 1084713 out.go:177]   - env NO_PROXY=192.168.39.174
	I0717 19:22:07.480747 1084713 main.go:141] libmachine: (multinode-464644-m02) Calling .GetIP
	I0717 19:22:07.483830 1084713 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:22:07.484230 1084713 main.go:141] libmachine: (multinode-464644-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:46:84", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:10:34 +0000 UTC Type:0 Mac:52:54:00:2d:46:84 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:multinode-464644-m02 Clientid:01:52:54:00:2d:46:84}
	I0717 19:22:07.484269 1084713 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined IP address 192.168.39.49 and MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:22:07.484485 1084713 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 19:22:07.490008 1084713 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0717 19:22:07.490086 1084713 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644 for IP: 192.168.39.49
	I0717 19:22:07.490109 1084713 certs.go:190] acquiring lock for shared ca certs: {Name:mkdfbc203d7bd874aff2f1c4e5c3352251804e32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:22:07.490300 1084713 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key
	I0717 19:22:07.490362 1084713 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key
	I0717 19:22:07.490380 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 19:22:07.490400 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 19:22:07.490418 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 19:22:07.490435 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 19:22:07.490550 1084713 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem (1338 bytes)
	W0717 19:22:07.490599 1084713 certs.go:433] ignoring /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954_empty.pem, impossibly tiny 0 bytes
	I0717 19:22:07.490614 1084713 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:22:07.490644 1084713 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem (1082 bytes)
	I0717 19:22:07.490679 1084713 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:22:07.490710 1084713 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem (1675 bytes)
	I0717 19:22:07.490773 1084713 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:22:07.490814 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem -> /usr/share/ca-certificates/1068954.pem
	I0717 19:22:07.490834 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> /usr/share/ca-certificates/10689542.pem
	I0717 19:22:07.490852 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:22:07.491351 1084713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:22:07.518841 1084713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 19:22:07.545419 1084713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:22:07.572302 1084713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 19:22:07.596324 1084713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem --> /usr/share/ca-certificates/1068954.pem (1338 bytes)
	I0717 19:22:07.620634 1084713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /usr/share/ca-certificates/10689542.pem (1708 bytes)
	I0717 19:22:07.645175 1084713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:22:07.670181 1084713 ssh_runner.go:195] Run: openssl version
	I0717 19:22:07.676057 1084713 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0717 19:22:07.676239 1084713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10689542.pem && ln -fs /usr/share/ca-certificates/10689542.pem /etc/ssl/certs/10689542.pem"
	I0717 19:22:07.687868 1084713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10689542.pem
	I0717 19:22:07.693215 1084713 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 17 18:52 /usr/share/ca-certificates/10689542.pem
	I0717 19:22:07.693247 1084713 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:52 /usr/share/ca-certificates/10689542.pem
	I0717 19:22:07.693301 1084713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10689542.pem
	I0717 19:22:07.700283 1084713 command_runner.go:130] > 3ec20f2e
	I0717 19:22:07.700362 1084713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10689542.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:22:07.711423 1084713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:22:07.725325 1084713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:22:07.731518 1084713 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 17 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:22:07.731563 1084713 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:22:07.731615 1084713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:22:07.739156 1084713 command_runner.go:130] > b5213941
	I0717 19:22:07.739285 1084713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:22:07.750304 1084713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1068954.pem && ln -fs /usr/share/ca-certificates/1068954.pem /etc/ssl/certs/1068954.pem"
	I0717 19:22:07.762491 1084713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1068954.pem
	I0717 19:22:07.768259 1084713 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 17 18:52 /usr/share/ca-certificates/1068954.pem
	I0717 19:22:07.768302 1084713 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:52 /usr/share/ca-certificates/1068954.pem
	I0717 19:22:07.768365 1084713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1068954.pem
	I0717 19:22:07.774592 1084713 command_runner.go:130] > 51391683
	I0717 19:22:07.774705 1084713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1068954.pem /etc/ssl/certs/51391683.0"
	I0717 19:22:07.784588 1084713 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 19:22:07.789132 1084713 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 19:22:07.789180 1084713 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 19:22:07.789263 1084713 ssh_runner.go:195] Run: crio config
	I0717 19:22:07.844320 1084713 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0717 19:22:07.844355 1084713 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0717 19:22:07.844365 1084713 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0717 19:22:07.844372 1084713 command_runner.go:130] > #
	I0717 19:22:07.844385 1084713 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0717 19:22:07.844396 1084713 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0717 19:22:07.844406 1084713 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0717 19:22:07.844531 1084713 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0717 19:22:07.844565 1084713 command_runner.go:130] > # reload'.
	I0717 19:22:07.844576 1084713 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0717 19:22:07.844588 1084713 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0717 19:22:07.844601 1084713 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0717 19:22:07.844611 1084713 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0717 19:22:07.844619 1084713 command_runner.go:130] > [crio]
	I0717 19:22:07.844629 1084713 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0717 19:22:07.844641 1084713 command_runner.go:130] > # containers images, in this directory.
	I0717 19:22:07.844648 1084713 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0717 19:22:07.844667 1084713 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0717 19:22:07.844678 1084713 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0717 19:22:07.844689 1084713 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0717 19:22:07.844702 1084713 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0717 19:22:07.844715 1084713 command_runner.go:130] > storage_driver = "overlay"
	I0717 19:22:07.844724 1084713 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0717 19:22:07.844737 1084713 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0717 19:22:07.844746 1084713 command_runner.go:130] > storage_option = [
	I0717 19:22:07.844754 1084713 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0717 19:22:07.844762 1084713 command_runner.go:130] > ]
	I0717 19:22:07.844772 1084713 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0717 19:22:07.844787 1084713 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0717 19:22:07.844794 1084713 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0717 19:22:07.844803 1084713 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0717 19:22:07.844813 1084713 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0717 19:22:07.844820 1084713 command_runner.go:130] > # always happen on a node reboot
	I0717 19:22:07.844827 1084713 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0717 19:22:07.844838 1084713 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0717 19:22:07.844846 1084713 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0717 19:22:07.844867 1084713 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0717 19:22:07.844881 1084713 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0717 19:22:07.844895 1084713 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0717 19:22:07.844912 1084713 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0717 19:22:07.844922 1084713 command_runner.go:130] > # internal_wipe = true
	I0717 19:22:07.844935 1084713 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0717 19:22:07.844949 1084713 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0717 19:22:07.844957 1084713 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0717 19:22:07.844967 1084713 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0717 19:22:07.844980 1084713 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0717 19:22:07.844989 1084713 command_runner.go:130] > [crio.api]
	I0717 19:22:07.844998 1084713 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0717 19:22:07.845009 1084713 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0717 19:22:07.845017 1084713 command_runner.go:130] > # IP address on which the stream server will listen.
	I0717 19:22:07.845027 1084713 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0717 19:22:07.845037 1084713 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0717 19:22:07.845050 1084713 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0717 19:22:07.845059 1084713 command_runner.go:130] > # stream_port = "0"
	I0717 19:22:07.845073 1084713 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0717 19:22:07.845084 1084713 command_runner.go:130] > # stream_enable_tls = false
	I0717 19:22:07.845099 1084713 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0717 19:22:07.845111 1084713 command_runner.go:130] > # stream_idle_timeout = ""
	I0717 19:22:07.845122 1084713 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0717 19:22:07.845134 1084713 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0717 19:22:07.845141 1084713 command_runner.go:130] > # minutes.
	I0717 19:22:07.845148 1084713 command_runner.go:130] > # stream_tls_cert = ""
	I0717 19:22:07.845161 1084713 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0717 19:22:07.845175 1084713 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0717 19:22:07.845225 1084713 command_runner.go:130] > # stream_tls_key = ""
	I0717 19:22:07.845241 1084713 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0717 19:22:07.845253 1084713 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0717 19:22:07.845262 1084713 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0717 19:22:07.845270 1084713 command_runner.go:130] > # stream_tls_ca = ""
	I0717 19:22:07.845287 1084713 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0717 19:22:07.845299 1084713 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0717 19:22:07.845314 1084713 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0717 19:22:07.845325 1084713 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0717 19:22:07.845346 1084713 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0717 19:22:07.845360 1084713 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0717 19:22:07.845371 1084713 command_runner.go:130] > [crio.runtime]
	I0717 19:22:07.845382 1084713 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0717 19:22:07.845395 1084713 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0717 19:22:07.845405 1084713 command_runner.go:130] > # "nofile=1024:2048"
	I0717 19:22:07.845415 1084713 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0717 19:22:07.845424 1084713 command_runner.go:130] > # default_ulimits = [
	I0717 19:22:07.845427 1084713 command_runner.go:130] > # ]
	I0717 19:22:07.845437 1084713 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0717 19:22:07.845442 1084713 command_runner.go:130] > # no_pivot = false
	I0717 19:22:07.845448 1084713 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0717 19:22:07.845454 1084713 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0717 19:22:07.845461 1084713 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0717 19:22:07.845467 1084713 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0717 19:22:07.845477 1084713 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0717 19:22:07.845490 1084713 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 19:22:07.845507 1084713 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0717 19:22:07.845518 1084713 command_runner.go:130] > # Cgroup setting for conmon
	I0717 19:22:07.845534 1084713 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0717 19:22:07.845546 1084713 command_runner.go:130] > conmon_cgroup = "pod"
	I0717 19:22:07.845578 1084713 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0717 19:22:07.845592 1084713 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0717 19:22:07.845603 1084713 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 19:22:07.845613 1084713 command_runner.go:130] > conmon_env = [
	I0717 19:22:07.845624 1084713 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0717 19:22:07.845633 1084713 command_runner.go:130] > ]
	I0717 19:22:07.845643 1084713 command_runner.go:130] > # Additional environment variables to set for all the
	I0717 19:22:07.845654 1084713 command_runner.go:130] > # containers. These are overridden if set in the
	I0717 19:22:07.845667 1084713 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0717 19:22:07.845677 1084713 command_runner.go:130] > # default_env = [
	I0717 19:22:07.845684 1084713 command_runner.go:130] > # ]
	I0717 19:22:07.845697 1084713 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0717 19:22:07.845704 1084713 command_runner.go:130] > # selinux = false
	I0717 19:22:07.845718 1084713 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0717 19:22:07.845730 1084713 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0717 19:22:07.845739 1084713 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0717 19:22:07.845748 1084713 command_runner.go:130] > # seccomp_profile = ""
	I0717 19:22:07.845761 1084713 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0717 19:22:07.845775 1084713 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0717 19:22:07.845789 1084713 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0717 19:22:07.845797 1084713 command_runner.go:130] > # which might increase security.
	I0717 19:22:07.845805 1084713 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0717 19:22:07.845816 1084713 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0717 19:22:07.845831 1084713 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0717 19:22:07.845846 1084713 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0717 19:22:07.845857 1084713 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0717 19:22:07.845870 1084713 command_runner.go:130] > # This option supports live configuration reload.
	I0717 19:22:07.845878 1084713 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0717 19:22:07.845891 1084713 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0717 19:22:07.845899 1084713 command_runner.go:130] > # the cgroup blockio controller.
	I0717 19:22:07.845907 1084713 command_runner.go:130] > # blockio_config_file = ""
	I0717 19:22:07.845921 1084713 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0717 19:22:07.845931 1084713 command_runner.go:130] > # irqbalance daemon.
	I0717 19:22:07.845941 1084713 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0717 19:22:07.845952 1084713 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0717 19:22:07.845966 1084713 command_runner.go:130] > # This option supports live configuration reload.
	I0717 19:22:07.845974 1084713 command_runner.go:130] > # rdt_config_file = ""
	I0717 19:22:07.845983 1084713 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0717 19:22:07.845990 1084713 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0717 19:22:07.846005 1084713 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0717 19:22:07.846016 1084713 command_runner.go:130] > # separate_pull_cgroup = ""
	I0717 19:22:07.846027 1084713 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0717 19:22:07.846041 1084713 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0717 19:22:07.846050 1084713 command_runner.go:130] > # will be added.
	I0717 19:22:07.846058 1084713 command_runner.go:130] > # default_capabilities = [
	I0717 19:22:07.846068 1084713 command_runner.go:130] > # 	"CHOWN",
	I0717 19:22:07.846075 1084713 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0717 19:22:07.846084 1084713 command_runner.go:130] > # 	"FSETID",
	I0717 19:22:07.846090 1084713 command_runner.go:130] > # 	"FOWNER",
	I0717 19:22:07.846099 1084713 command_runner.go:130] > # 	"SETGID",
	I0717 19:22:07.846108 1084713 command_runner.go:130] > # 	"SETUID",
	I0717 19:22:07.846146 1084713 command_runner.go:130] > # 	"SETPCAP",
	I0717 19:22:07.846158 1084713 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0717 19:22:07.846164 1084713 command_runner.go:130] > # 	"KILL",
	I0717 19:22:07.846170 1084713 command_runner.go:130] > # ]
	I0717 19:22:07.846185 1084713 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0717 19:22:07.846200 1084713 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 19:22:07.846210 1084713 command_runner.go:130] > # default_sysctls = [
	I0717 19:22:07.846217 1084713 command_runner.go:130] > # ]
	I0717 19:22:07.846228 1084713 command_runner.go:130] > # List of devices on the host that a
	I0717 19:22:07.846239 1084713 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0717 19:22:07.846250 1084713 command_runner.go:130] > # allowed_devices = [
	I0717 19:22:07.846256 1084713 command_runner.go:130] > # 	"/dev/fuse",
	I0717 19:22:07.846263 1084713 command_runner.go:130] > # ]
	I0717 19:22:07.846271 1084713 command_runner.go:130] > # List of additional devices, specified as
	I0717 19:22:07.846286 1084713 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0717 19:22:07.846294 1084713 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0717 19:22:07.846324 1084713 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 19:22:07.846336 1084713 command_runner.go:130] > # additional_devices = [
	I0717 19:22:07.846342 1084713 command_runner.go:130] > # ]
	I0717 19:22:07.846355 1084713 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0717 19:22:07.846377 1084713 command_runner.go:130] > # cdi_spec_dirs = [
	I0717 19:22:07.846387 1084713 command_runner.go:130] > # 	"/etc/cdi",
	I0717 19:22:07.846394 1084713 command_runner.go:130] > # 	"/var/run/cdi",
	I0717 19:22:07.846400 1084713 command_runner.go:130] > # ]
	I0717 19:22:07.846410 1084713 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0717 19:22:07.846425 1084713 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0717 19:22:07.846432 1084713 command_runner.go:130] > # Defaults to false.
	I0717 19:22:07.846442 1084713 command_runner.go:130] > # device_ownership_from_security_context = false
	I0717 19:22:07.846457 1084713 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0717 19:22:07.846467 1084713 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0717 19:22:07.846477 1084713 command_runner.go:130] > # hooks_dir = [
	I0717 19:22:07.846485 1084713 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0717 19:22:07.846501 1084713 command_runner.go:130] > # ]
	I0717 19:22:07.846513 1084713 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0717 19:22:07.846527 1084713 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0717 19:22:07.846540 1084713 command_runner.go:130] > # its default mounts from the following two files:
	I0717 19:22:07.846549 1084713 command_runner.go:130] > #
	I0717 19:22:07.846562 1084713 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0717 19:22:07.846577 1084713 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0717 19:22:07.846591 1084713 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0717 19:22:07.846597 1084713 command_runner.go:130] > #
	I0717 19:22:07.846611 1084713 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0717 19:22:07.846625 1084713 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0717 19:22:07.846639 1084713 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0717 19:22:07.846650 1084713 command_runner.go:130] > #      only add mounts it finds in this file.
	I0717 19:22:07.846654 1084713 command_runner.go:130] > #
	I0717 19:22:07.846662 1084713 command_runner.go:130] > # default_mounts_file = ""
	I0717 19:22:07.846671 1084713 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0717 19:22:07.846686 1084713 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0717 19:22:07.846697 1084713 command_runner.go:130] > pids_limit = 1024
	I0717 19:22:07.846707 1084713 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0717 19:22:07.846721 1084713 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0717 19:22:07.846735 1084713 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0717 19:22:07.846750 1084713 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0717 19:22:07.846760 1084713 command_runner.go:130] > # log_size_max = -1
	I0717 19:22:07.846773 1084713 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0717 19:22:07.846785 1084713 command_runner.go:130] > # log_to_journald = false
	I0717 19:22:07.846796 1084713 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0717 19:22:07.846809 1084713 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0717 19:22:07.846818 1084713 command_runner.go:130] > # Path to directory for container attach sockets.
	I0717 19:22:07.846831 1084713 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0717 19:22:07.846844 1084713 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0717 19:22:07.846852 1084713 command_runner.go:130] > # bind_mount_prefix = ""
	I0717 19:22:07.846864 1084713 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0717 19:22:07.846874 1084713 command_runner.go:130] > # read_only = false
	I0717 19:22:07.846885 1084713 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0717 19:22:07.846899 1084713 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0717 19:22:07.846910 1084713 command_runner.go:130] > # live configuration reload.
	I0717 19:22:07.846917 1084713 command_runner.go:130] > # log_level = "info"
	I0717 19:22:07.846928 1084713 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0717 19:22:07.846940 1084713 command_runner.go:130] > # This option supports live configuration reload.
	I0717 19:22:07.846950 1084713 command_runner.go:130] > # log_filter = ""
	I0717 19:22:07.846960 1084713 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0717 19:22:07.846974 1084713 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0717 19:22:07.846988 1084713 command_runner.go:130] > # separated by comma.
	I0717 19:22:07.846992 1084713 command_runner.go:130] > # uid_mappings = ""
	I0717 19:22:07.847006 1084713 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0717 19:22:07.847019 1084713 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0717 19:22:07.847027 1084713 command_runner.go:130] > # separated by comma.
	I0717 19:22:07.847037 1084713 command_runner.go:130] > # gid_mappings = ""
	I0717 19:22:07.847048 1084713 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0717 19:22:07.847062 1084713 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 19:22:07.847075 1084713 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 19:22:07.847087 1084713 command_runner.go:130] > # minimum_mappable_uid = -1
	I0717 19:22:07.847098 1084713 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0717 19:22:07.847133 1084713 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 19:22:07.847148 1084713 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 19:22:07.847156 1084713 command_runner.go:130] > # minimum_mappable_gid = -1
	I0717 19:22:07.847166 1084713 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0717 19:22:07.847176 1084713 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0717 19:22:07.847190 1084713 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0717 19:22:07.847201 1084713 command_runner.go:130] > # ctr_stop_timeout = 30
	I0717 19:22:07.847213 1084713 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0717 19:22:07.847227 1084713 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0717 19:22:07.847239 1084713 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0717 19:22:07.847250 1084713 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0717 19:22:07.847260 1084713 command_runner.go:130] > drop_infra_ctr = false
	I0717 19:22:07.847272 1084713 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0717 19:22:07.847282 1084713 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0717 19:22:07.847297 1084713 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0717 19:22:07.847308 1084713 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0717 19:22:07.847319 1084713 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0717 19:22:07.847331 1084713 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0717 19:22:07.847342 1084713 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0717 19:22:07.847357 1084713 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0717 19:22:07.847367 1084713 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0717 19:22:07.847380 1084713 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0717 19:22:07.847394 1084713 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0717 19:22:07.847408 1084713 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0717 19:22:07.847419 1084713 command_runner.go:130] > # default_runtime = "runc"
	I0717 19:22:07.847432 1084713 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0717 19:22:07.847447 1084713 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0717 19:22:07.847467 1084713 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0717 19:22:07.847478 1084713 command_runner.go:130] > # creation as a file is not desired either.
	I0717 19:22:07.847492 1084713 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0717 19:22:07.847510 1084713 command_runner.go:130] > # the hostname is being managed dynamically.
	I0717 19:22:07.847520 1084713 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0717 19:22:07.847526 1084713 command_runner.go:130] > # ]
	I0717 19:22:07.847534 1084713 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0717 19:22:07.847549 1084713 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0717 19:22:07.847563 1084713 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0717 19:22:07.847576 1084713 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0717 19:22:07.847582 1084713 command_runner.go:130] > #
	I0717 19:22:07.847592 1084713 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0717 19:22:07.847603 1084713 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0717 19:22:07.847613 1084713 command_runner.go:130] > #  runtime_type = "oci"
	I0717 19:22:07.847622 1084713 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0717 19:22:07.847634 1084713 command_runner.go:130] > #  privileged_without_host_devices = false
	I0717 19:22:07.847646 1084713 command_runner.go:130] > #  allowed_annotations = []
	I0717 19:22:07.847655 1084713 command_runner.go:130] > # Where:
	I0717 19:22:07.847668 1084713 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0717 19:22:07.847681 1084713 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0717 19:22:07.847695 1084713 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0717 19:22:07.847707 1084713 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0717 19:22:07.847717 1084713 command_runner.go:130] > #   in $PATH.
	I0717 19:22:07.847731 1084713 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0717 19:22:07.847743 1084713 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0717 19:22:07.847754 1084713 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0717 19:22:07.847764 1084713 command_runner.go:130] > #   state.
	I0717 19:22:07.847775 1084713 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0717 19:22:07.847788 1084713 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0717 19:22:07.847801 1084713 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0717 19:22:07.847814 1084713 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0717 19:22:07.847827 1084713 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0717 19:22:07.847841 1084713 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0717 19:22:07.847853 1084713 command_runner.go:130] > #   The currently recognized values are:
	I0717 19:22:07.847867 1084713 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0717 19:22:07.847889 1084713 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0717 19:22:07.847903 1084713 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0717 19:22:07.847917 1084713 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0717 19:22:07.847930 1084713 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0717 19:22:07.847944 1084713 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0717 19:22:07.847981 1084713 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0717 19:22:07.847996 1084713 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0717 19:22:07.848008 1084713 command_runner.go:130] > #   should be moved to the container's cgroup
	I0717 19:22:07.848019 1084713 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0717 19:22:07.848030 1084713 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0717 19:22:07.848040 1084713 command_runner.go:130] > runtime_type = "oci"
	I0717 19:22:07.848051 1084713 command_runner.go:130] > runtime_root = "/run/runc"
	I0717 19:22:07.848062 1084713 command_runner.go:130] > runtime_config_path = ""
	I0717 19:22:07.848072 1084713 command_runner.go:130] > monitor_path = ""
	I0717 19:22:07.848079 1084713 command_runner.go:130] > monitor_cgroup = ""
	I0717 19:22:07.848089 1084713 command_runner.go:130] > monitor_exec_cgroup = ""
	I0717 19:22:07.848100 1084713 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0717 19:22:07.848113 1084713 command_runner.go:130] > # running containers
	I0717 19:22:07.848124 1084713 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0717 19:22:07.848138 1084713 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0717 19:22:07.848176 1084713 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0717 19:22:07.848190 1084713 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0717 19:22:07.848200 1084713 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0717 19:22:07.848212 1084713 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0717 19:22:07.848220 1084713 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0717 19:22:07.848244 1084713 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0717 19:22:07.848255 1084713 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0717 19:22:07.848261 1084713 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0717 19:22:07.848274 1084713 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0717 19:22:07.848286 1084713 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0717 19:22:07.848300 1084713 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0717 19:22:07.848316 1084713 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0717 19:22:07.848332 1084713 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0717 19:22:07.848344 1084713 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0717 19:22:07.848357 1084713 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0717 19:22:07.848374 1084713 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0717 19:22:07.848387 1084713 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0717 19:22:07.848403 1084713 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0717 19:22:07.848413 1084713 command_runner.go:130] > # Example:
	I0717 19:22:07.848421 1084713 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0717 19:22:07.848432 1084713 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0717 19:22:07.848444 1084713 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0717 19:22:07.848457 1084713 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0717 19:22:07.848466 1084713 command_runner.go:130] > # cpuset = 0
	I0717 19:22:07.848473 1084713 command_runner.go:130] > # cpushares = "0-1"
	I0717 19:22:07.848482 1084713 command_runner.go:130] > # Where:
	I0717 19:22:07.848490 1084713 command_runner.go:130] > # The workload name is workload-type.
	I0717 19:22:07.848507 1084713 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0717 19:22:07.848515 1084713 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0717 19:22:07.848521 1084713 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0717 19:22:07.848534 1084713 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0717 19:22:07.848548 1084713 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0717 19:22:07.848554 1084713 command_runner.go:130] > # 
	I0717 19:22:07.848570 1084713 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0717 19:22:07.848578 1084713 command_runner.go:130] > #
	I0717 19:22:07.848588 1084713 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0717 19:22:07.848601 1084713 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0717 19:22:07.848614 1084713 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0717 19:22:07.848624 1084713 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0717 19:22:07.848637 1084713 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0717 19:22:07.848648 1084713 command_runner.go:130] > [crio.image]
	I0717 19:22:07.848659 1084713 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0717 19:22:07.848670 1084713 command_runner.go:130] > # default_transport = "docker://"
	I0717 19:22:07.848683 1084713 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0717 19:22:07.848697 1084713 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0717 19:22:07.848707 1084713 command_runner.go:130] > # global_auth_file = ""
	I0717 19:22:07.848714 1084713 command_runner.go:130] > # The image used to instantiate infra containers.
	I0717 19:22:07.848723 1084713 command_runner.go:130] > # This option supports live configuration reload.
	I0717 19:22:07.848731 1084713 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0717 19:22:07.848746 1084713 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0717 19:22:07.848759 1084713 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0717 19:22:07.848771 1084713 command_runner.go:130] > # This option supports live configuration reload.
	I0717 19:22:07.848781 1084713 command_runner.go:130] > # pause_image_auth_file = ""
	I0717 19:22:07.848791 1084713 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0717 19:22:07.848803 1084713 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0717 19:22:07.848812 1084713 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0717 19:22:07.848822 1084713 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0717 19:22:07.848863 1084713 command_runner.go:130] > # pause_command = "/pause"
	I0717 19:22:07.848877 1084713 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0717 19:22:07.848891 1084713 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0717 19:22:07.848903 1084713 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0717 19:22:07.848912 1084713 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0717 19:22:07.848922 1084713 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0717 19:22:07.848933 1084713 command_runner.go:130] > # signature_policy = ""
	I0717 19:22:07.848946 1084713 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0717 19:22:07.848960 1084713 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0717 19:22:07.848970 1084713 command_runner.go:130] > # changing them here.
	I0717 19:22:07.848980 1084713 command_runner.go:130] > # insecure_registries = [
	I0717 19:22:07.848986 1084713 command_runner.go:130] > # ]
	I0717 19:22:07.849000 1084713 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0717 19:22:07.849008 1084713 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0717 19:22:07.849015 1084713 command_runner.go:130] > # image_volumes = "mkdir"
	I0717 19:22:07.849028 1084713 command_runner.go:130] > # Temporary directory to use for storing big files
	I0717 19:22:07.849038 1084713 command_runner.go:130] > # big_files_temporary_dir = ""
	I0717 19:22:07.849047 1084713 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0717 19:22:07.849056 1084713 command_runner.go:130] > # CNI plugins.
	I0717 19:22:07.849062 1084713 command_runner.go:130] > [crio.network]
	I0717 19:22:07.849075 1084713 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0717 19:22:07.849086 1084713 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0717 19:22:07.849097 1084713 command_runner.go:130] > # cni_default_network = ""
	I0717 19:22:07.849114 1084713 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0717 19:22:07.849124 1084713 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0717 19:22:07.849135 1084713 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0717 19:22:07.849145 1084713 command_runner.go:130] > # plugin_dirs = [
	I0717 19:22:07.849153 1084713 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0717 19:22:07.849158 1084713 command_runner.go:130] > # ]
	I0717 19:22:07.849170 1084713 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0717 19:22:07.849179 1084713 command_runner.go:130] > [crio.metrics]
	I0717 19:22:07.849188 1084713 command_runner.go:130] > # Globally enable or disable metrics support.
	I0717 19:22:07.849197 1084713 command_runner.go:130] > enable_metrics = true
	I0717 19:22:07.849205 1084713 command_runner.go:130] > # Specify enabled metrics collectors.
	I0717 19:22:07.849216 1084713 command_runner.go:130] > # Per default all metrics are enabled.
	I0717 19:22:07.849226 1084713 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0717 19:22:07.849242 1084713 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0717 19:22:07.849254 1084713 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0717 19:22:07.849260 1084713 command_runner.go:130] > # metrics_collectors = [
	I0717 19:22:07.849269 1084713 command_runner.go:130] > # 	"operations",
	I0717 19:22:07.849277 1084713 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0717 19:22:07.849288 1084713 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0717 19:22:07.849297 1084713 command_runner.go:130] > # 	"operations_errors",
	I0717 19:22:07.849304 1084713 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0717 19:22:07.849314 1084713 command_runner.go:130] > # 	"image_pulls_by_name",
	I0717 19:22:07.849322 1084713 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0717 19:22:07.849332 1084713 command_runner.go:130] > # 	"image_pulls_failures",
	I0717 19:22:07.849343 1084713 command_runner.go:130] > # 	"image_pulls_successes",
	I0717 19:22:07.849352 1084713 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0717 19:22:07.849363 1084713 command_runner.go:130] > # 	"image_layer_reuse",
	I0717 19:22:07.849369 1084713 command_runner.go:130] > # 	"containers_oom_total",
	I0717 19:22:07.849377 1084713 command_runner.go:130] > # 	"containers_oom",
	I0717 19:22:07.849382 1084713 command_runner.go:130] > # 	"processes_defunct",
	I0717 19:22:07.849388 1084713 command_runner.go:130] > # 	"operations_total",
	I0717 19:22:07.849393 1084713 command_runner.go:130] > # 	"operations_latency_seconds",
	I0717 19:22:07.849399 1084713 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0717 19:22:07.849404 1084713 command_runner.go:130] > # 	"operations_errors_total",
	I0717 19:22:07.849410 1084713 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0717 19:22:07.849415 1084713 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0717 19:22:07.849420 1084713 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0717 19:22:07.849424 1084713 command_runner.go:130] > # 	"image_pulls_success_total",
	I0717 19:22:07.849429 1084713 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0717 19:22:07.849435 1084713 command_runner.go:130] > # 	"containers_oom_count_total",
	I0717 19:22:07.849444 1084713 command_runner.go:130] > # ]
	I0717 19:22:07.849453 1084713 command_runner.go:130] > # The port on which the metrics server will listen.
	I0717 19:22:07.849462 1084713 command_runner.go:130] > # metrics_port = 9090
	I0717 19:22:07.849472 1084713 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0717 19:22:07.849482 1084713 command_runner.go:130] > # metrics_socket = ""
	I0717 19:22:07.849494 1084713 command_runner.go:130] > # The certificate for the secure metrics server.
	I0717 19:22:07.849512 1084713 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0717 19:22:07.849525 1084713 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0717 19:22:07.849534 1084713 command_runner.go:130] > # certificate on any modification event.
	I0717 19:22:07.849538 1084713 command_runner.go:130] > # metrics_cert = ""
	I0717 19:22:07.849548 1084713 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0717 19:22:07.849570 1084713 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0717 19:22:07.849578 1084713 command_runner.go:130] > # metrics_key = ""
	I0717 19:22:07.849593 1084713 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0717 19:22:07.849603 1084713 command_runner.go:130] > [crio.tracing]
	I0717 19:22:07.849612 1084713 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0717 19:22:07.849622 1084713 command_runner.go:130] > # enable_tracing = false
	I0717 19:22:07.849631 1084713 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0717 19:22:07.849641 1084713 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0717 19:22:07.849647 1084713 command_runner.go:130] > # Number of samples to collect per million spans.
	I0717 19:22:07.849657 1084713 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0717 19:22:07.849694 1084713 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0717 19:22:07.849704 1084713 command_runner.go:130] > [crio.stats]
	I0717 19:22:07.849715 1084713 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0717 19:22:07.849728 1084713 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0717 19:22:07.849739 1084713 command_runner.go:130] > # stats_collection_period = 0
	I0717 19:22:07.850468 1084713 command_runner.go:130] ! time="2023-07-17 19:22:07.833278331Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0717 19:22:07.850500 1084713 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
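The dump above is the effective CRI-O configuration reported by crio config on this node (CRI-O 1.24.1 per the startup message). Its own comments mark a handful of options, log_level among them, as supporting live configuration reload via SIGHUP. A hypothetical drop-in illustrating that mechanism, assuming the stock /etc/crio/crio.conf.d directory is honored on this image:

	# Hypothetical: raise CRI-O's log level with a drop-in, then trigger the
	# partial configuration reload described in the comments above.
	sudo tee /etc/crio/crio.conf.d/99-debug.conf >/dev/null <<-'EOF'
	[crio.runtime]
	log_level = "debug"
	EOF
	sudo pkill -HUP -x crio   # SIGHUP reloads only the options marked as live-reloadable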
	I0717 19:22:07.850577 1084713 cni.go:84] Creating CNI manager for ""
	I0717 19:22:07.850598 1084713 cni.go:137] 3 nodes found, recommending kindnet
	I0717 19:22:07.850608 1084713 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 19:22:07.850628 1084713 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.49 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-464644 NodeName:multinode-464644-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.49 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:22:07.850759 1084713 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.49
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-464644-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.49
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
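The block above is the multi-document config minikube renders for this node: InitConfiguration and ClusterConfiguration for kubeadm, plus KubeletConfiguration and KubeProxyConfiguration. Files of this shape are fed to kubeadm through --config; a rough, side-effect-free way to exercise such a file is a dry run (the path below is illustrative, not necessarily where minikube writes it on this machine):

	# Illustrative only: parse the config and walk the init phases without
	# changing the node; --dry-run keeps generated files in a temporary directory.
	sudo /var/lib/minikube/binaries/v1.27.3/kubeadm init --config /var/lib/minikube/kubeadm.yaml --dry-run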
	I0717 19:22:07.850815 1084713 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-464644-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.49
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:multinode-464644 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 19:22:07.850870 1084713 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 19:22:07.860916 1084713 command_runner.go:130] > kubeadm
	I0717 19:22:07.860941 1084713 command_runner.go:130] > kubectl
	I0717 19:22:07.860945 1084713 command_runner.go:130] > kubelet
	I0717 19:22:07.860980 1084713 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:22:07.861049 1084713 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0717 19:22:07.871190 1084713 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0717 19:22:07.889273 1084713 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
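The two scp calls above place the rendered drop-in at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and the base unit at /lib/systemd/system/kubelet.service. This excerpt does not show it, but the usual follow-up once unit files change is a systemd reload and a kubelet restart:

	# Standard systemd steps after replacing kubelet unit files (not shown in this excerpt).
	sudo systemctl daemon-reload
	sudo systemctl restart kubelet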
	I0717 19:22:07.907134 1084713 ssh_runner.go:195] Run: grep 192.168.39.174	control-plane.minikube.internal$ /etc/hosts
	I0717 19:22:07.911944 1084713 command_runner.go:130] > 192.168.39.174	control-plane.minikube.internal
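The grep confirms that control-plane.minikube.internal already resolves to 192.168.39.174 in /etc/hosts. Had the entry been missing, the equivalent manual fix would be a guarded append (a sketch, using the address from this log):

	# Add the control-plane alias only if it is not already present.
	grep -q 'control-plane.minikube.internal$' /etc/hosts || \
	  echo '192.168.39.174	control-plane.minikube.internal' | sudo tee -a /etc/hosts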
	I0717 19:22:07.912060 1084713 host.go:66] Checking if "multinode-464644" exists ...
	I0717 19:22:07.912408 1084713 config.go:182] Loaded profile config "multinode-464644": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:22:07.912441 1084713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:22:07.912477 1084713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:22:07.928780 1084713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44155
	I0717 19:22:07.929326 1084713 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:22:07.930070 1084713 main.go:141] libmachine: Using API Version  1
	I0717 19:22:07.930105 1084713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:22:07.930541 1084713 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:22:07.930789 1084713 main.go:141] libmachine: (multinode-464644) Calling .DriverName
	I0717 19:22:07.930941 1084713 start.go:304] JoinCluster: &{Name:multinode-464644 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-464644 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.49 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.247 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false i
stio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0}
	I0717 19:22:07.931071 1084713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0717 19:22:07.931093 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHHostname
	I0717 19:22:07.933903 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:22:07.934550 1084713 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:19:47 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:22:07.934576 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:22:07.934867 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHPort
	I0717 19:22:07.935073 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHKeyPath
	I0717 19:22:07.935331 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHUsername
	I0717 19:22:07.935632 1084713 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644/id_rsa Username:docker}
	I0717 19:22:08.126646 1084713 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token ybrob0.09a6f5b83q7wkv5g --discovery-token-ca-cert-hash sha256:a35378d49f60cb3201ada96dd7ea3b95e7cce0b7f53bd002f18d31a55869c8fc 
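The join command above is generated on the control-plane host. As a standalone sketch of the same step (the token and CA-cert hash are per-run values taken from this log and would differ on another run; --ttl=0 makes the bootstrap token non-expiring):

    sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" \
      kubeadm token create --print-join-command --ttl=0
    # prints: kubeadm join control-plane.minikube.internal:8443 \
    #   --token <token> --discovery-token-ca-cert-hash sha256:<hash>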
	I0717 19:22:08.128723 1084713 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.49 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0717 19:22:08.128774 1084713 host.go:66] Checking if "multinode-464644" exists ...
	I0717 19:22:08.129102 1084713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:22:08.129133 1084713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:22:08.146027 1084713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37323
	I0717 19:22:08.146584 1084713 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:22:08.147249 1084713 main.go:141] libmachine: Using API Version  1
	I0717 19:22:08.147271 1084713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:22:08.147759 1084713 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:22:08.148010 1084713 main.go:141] libmachine: (multinode-464644) Calling .DriverName
	I0717 19:22:08.148331 1084713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl drain multinode-464644-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0717 19:22:08.148372 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHHostname
	I0717 19:22:08.152096 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:22:08.152802 1084713 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:19:47 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:22:08.152849 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:22:08.153115 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHPort
	I0717 19:22:08.153368 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHKeyPath
	I0717 19:22:08.153609 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHUsername
	I0717 19:22:08.153941 1084713 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644/id_rsa Username:docker}
	I0717 19:22:08.378830 1084713 command_runner.go:130] > node/multinode-464644-m02 cordoned
	I0717 19:22:11.431623 1084713 command_runner.go:130] > pod "busybox-67b7f59bb-bjpl2" has DeletionTimestamp older than 1 seconds, skipping
	I0717 19:22:11.431653 1084713 command_runner.go:130] > node/multinode-464644-m02 drained
	I0717 19:22:11.433432 1084713 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0717 19:22:11.433467 1084713 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-t77xh, kube-system/kube-proxy-j6ds6
	I0717 19:22:11.433492 1084713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl drain multinode-464644-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.28513228s)
	I0717 19:22:11.433507 1084713 node.go:108] successfully drained node "m02"
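The drain that precedes the rejoin uses the flags shown in the Run line above; as a standalone sketch run on the control-plane node:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.27.3/kubectl drain multinode-464644-m02 \
      --force --grace-period=1 --skip-wait-for-delete-timeout=1 \
      --disable-eviction --ignore-daemonsets --delete-emptydir-data

--delete-local-data is omitted from the sketch since, per the deprecation warning above, --delete-emptydir-data replaces it.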
	I0717 19:22:11.434035 1084713 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 19:22:11.434356 1084713 kapi.go:59] client config for multinode-464644: &rest.Config{Host:"https://192.168.39.174:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/client.key", CAFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 19:22:11.434926 1084713 request.go:1188] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0717 19:22:11.435001 1084713 round_trippers.go:463] DELETE https://192.168.39.174:8443/api/v1/nodes/multinode-464644-m02
	I0717 19:22:11.435015 1084713 round_trippers.go:469] Request Headers:
	I0717 19:22:11.435024 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:22:11.435033 1084713 round_trippers.go:473]     Content-Type: application/json
	I0717 19:22:11.435046 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:22:11.453970 1084713 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0717 19:22:11.454000 1084713 round_trippers.go:577] Response Headers:
	I0717 19:22:11.454009 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:22:11.454015 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:22:11.454021 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:22:11.454026 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:22:11.454032 1084713 round_trippers.go:580]     Content-Length: 171
	I0717 19:22:11.454037 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:22:11 GMT
	I0717 19:22:11.454042 1084713 round_trippers.go:580]     Audit-Id: af41e516-3398-41b0-bf30-46462caa6c50
	I0717 19:22:11.454091 1084713 request.go:1188] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-464644-m02","kind":"nodes","uid":"7808c8a5-eed0-4632-bd7c-dc2a2a06fa78"}}
	I0717 19:22:11.454147 1084713 node.go:124] successfully deleted node "m02"
	I0717 19:22:11.454163 1084713 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.49 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
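The deletion above is a raw DELETE against /api/v1/nodes/multinode-464644-m02; a kubectl equivalent, which issues the same request, would be:

    kubectl --kubeconfig /var/lib/minikube/kubeconfig delete node multinode-464644-m02
    # node "multinode-464644-m02" deleted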
	I0717 19:22:11.454195 1084713 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.49 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0717 19:22:11.454223 1084713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ybrob0.09a6f5b83q7wkv5g --discovery-token-ca-cert-hash sha256:a35378d49f60cb3201ada96dd7ea3b95e7cce0b7f53bd002f18d31a55869c8fc --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-464644-m02"
	I0717 19:22:11.510366 1084713 command_runner.go:130] ! W0717 19:22:11.501769    2572 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0717 19:22:11.510395 1084713 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0717 19:22:11.660368 1084713 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0717 19:22:11.660418 1084713 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0717 19:22:12.436401 1084713 command_runner.go:130] > [preflight] Running pre-flight checks
	I0717 19:22:12.436437 1084713 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0717 19:22:12.436452 1084713 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0717 19:22:12.436466 1084713 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:22:12.436476 1084713 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:22:12.436481 1084713 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0717 19:22:12.436487 1084713 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0717 19:22:12.436502 1084713 command_runner.go:130] > This node has joined the cluster:
	I0717 19:22:12.436517 1084713 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0717 19:22:12.436530 1084713 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0717 19:22:12.436541 1084713 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0717 19:22:12.436577 1084713 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0717 19:22:12.735708 1084713 start.go:306] JoinCluster complete in 4.804760017s
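The rejoin itself is the kubeadm join from the Run line above followed by a kubelet restart; condensed as a sketch (token and hash are the per-run values printed earlier, and the CRI socket is written here with the unix:// scheme that the preflight warning recommends):

    sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" \
      kubeadm join control-plane.minikube.internal:8443 \
      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
      --ignore-preflight-errors=all \
      --cri-socket unix:///var/run/crio/crio.sock \
      --node-name=multinode-464644-m02
    sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet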
	I0717 19:22:12.735749 1084713 cni.go:84] Creating CNI manager for ""
	I0717 19:22:12.735755 1084713 cni.go:137] 3 nodes found, recommending kindnet
	I0717 19:22:12.735822 1084713 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 19:22:12.741988 1084713 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0717 19:22:12.742023 1084713 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0717 19:22:12.742039 1084713 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0717 19:22:12.742045 1084713 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 19:22:12.742059 1084713 command_runner.go:130] > Access: 2023-07-17 19:19:47.710331536 +0000
	I0717 19:22:12.742068 1084713 command_runner.go:130] > Modify: 2023-07-15 02:34:28.000000000 +0000
	I0717 19:22:12.742077 1084713 command_runner.go:130] > Change: 2023-07-17 19:19:45.751331536 +0000
	I0717 19:22:12.742087 1084713 command_runner.go:130] >  Birth: -
	I0717 19:22:12.742255 1084713 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0717 19:22:12.742280 1084713 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 19:22:12.762345 1084713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 19:22:13.281839 1084713 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0717 19:22:13.281867 1084713 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0717 19:22:13.281873 1084713 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0717 19:22:13.281878 1084713 command_runner.go:130] > daemonset.apps/kindnet configured
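With three nodes detected, the kindnet manifest is copied to /var/tmp/minikube/cni.yaml and applied; the apply step run over SSH above reduces to:

    sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply \
      --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml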
	I0717 19:22:13.282333 1084713 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 19:22:13.282603 1084713 kapi.go:59] client config for multinode-464644: &rest.Config{Host:"https://192.168.39.174:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/client.key", CAFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 19:22:13.282978 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0717 19:22:13.282997 1084713 round_trippers.go:469] Request Headers:
	I0717 19:22:13.283005 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:22:13.283011 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:22:13.286352 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:22:13.286379 1084713 round_trippers.go:577] Response Headers:
	I0717 19:22:13.286388 1084713 round_trippers.go:580]     Audit-Id: 734f9f67-69f6-4a6f-9c32-5eded5d9562b
	I0717 19:22:13.286400 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:22:13.286408 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:22:13.286420 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:22:13.286431 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:22:13.286443 1084713 round_trippers.go:580]     Content-Length: 291
	I0717 19:22:13.286452 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:22:13 GMT
	I0717 19:22:13.286484 1084713 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"06c3326f-def8-45bf-a91d-f07feefe253d","resourceVersion":"892","creationTimestamp":"2023-07-17T19:09:54Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0717 19:22:13.286604 1084713 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-464644" context rescaled to 1 replicas
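The rescale check reads the coredns Scale subresource and leaves it at one replica; a kubectl sketch of the same read-then-scale (the log only shows the GET, since the deployment is already at 1):

    kubectl --kubeconfig /var/lib/minikube/kubeconfig -n kube-system \
      get deployment coredns -o jsonpath='{.spec.replicas}'
    kubectl --kubeconfig /var/lib/minikube/kubeconfig -n kube-system \
      scale deployment coredns --replicas=1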
	I0717 19:22:13.286641 1084713 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.49 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0717 19:22:13.289212 1084713 out.go:177] * Verifying Kubernetes components...
	I0717 19:22:13.291238 1084713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:22:13.306210 1084713 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 19:22:13.306424 1084713 kapi.go:59] client config for multinode-464644: &rest.Config{Host:"https://192.168.39.174:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/client.key", CAFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 19:22:13.306674 1084713 node_ready.go:35] waiting up to 6m0s for node "multinode-464644-m02" to be "Ready" ...
	I0717 19:22:13.306743 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644-m02
	I0717 19:22:13.306747 1084713 round_trippers.go:469] Request Headers:
	I0717 19:22:13.306757 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:22:13.306766 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:22:13.311943 1084713 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 19:22:13.311977 1084713 round_trippers.go:577] Response Headers:
	I0717 19:22:13.311990 1084713 round_trippers.go:580]     Audit-Id: ecdb081a-e73a-44af-aab0-596fddf1dcf1
	I0717 19:22:13.311999 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:22:13.312008 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:22:13.312016 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:22:13.312027 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:22:13.312040 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:22:13 GMT
	I0717 19:22:13.312463 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644-m02","uid":"8a7d3b54-fa08-45cf-b8cb-6e947d45ee9a","resourceVersion":"1027","creationTimestamp":"2023-07-17T19:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:22:12Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3441 chars]
	I0717 19:22:13.312796 1084713 node_ready.go:49] node "multinode-464644-m02" has status "Ready":"True"
	I0717 19:22:13.312814 1084713 node_ready.go:38] duration metric: took 6.123806ms waiting for node "multinode-464644-m02" to be "Ready" ...
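The node-ready poll above is a GET of the Node object followed by a check of its Ready condition; a hedged kubectl equivalent with the same six-minute budget:

    kubectl --kubeconfig /var/lib/minikube/kubeconfig \
      wait node/multinode-464644-m02 --for=condition=Ready --timeout=6m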
	I0717 19:22:13.312826 1084713 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:22:13.312898 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods
	I0717 19:22:13.312907 1084713 round_trippers.go:469] Request Headers:
	I0717 19:22:13.312914 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:22:13.312921 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:22:13.322122 1084713 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0717 19:22:13.322160 1084713 round_trippers.go:577] Response Headers:
	I0717 19:22:13.322174 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:22:13 GMT
	I0717 19:22:13.322182 1084713 round_trippers.go:580]     Audit-Id: 0c86726c-10dc-46a9-9428-a492b160e44a
	I0717 19:22:13.322190 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:22:13.322198 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:22:13.322205 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:22:13.322214 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:22:13.323226 1084713 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1035"},"items":[{"metadata":{"name":"coredns-5d78c9869d-wqj4s","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991","resourceVersion":"873","creationTimestamp":"2023-07-17T19:10:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"8369cc45-03bf-4784-a3b1-d46615923fd9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8369cc45-03bf-4784-a3b1-d46615923fd9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82241 chars]
	I0717 19:22:13.326025 1084713 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-wqj4s" in "kube-system" namespace to be "Ready" ...
	I0717 19:22:13.326130 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-wqj4s
	I0717 19:22:13.326143 1084713 round_trippers.go:469] Request Headers:
	I0717 19:22:13.326154 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:22:13.326162 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:22:13.329752 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:22:13.329833 1084713 round_trippers.go:577] Response Headers:
	I0717 19:22:13.329850 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:22:13.329858 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:22:13.329866 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:22:13.329874 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:22:13.329883 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:22:13 GMT
	I0717 19:22:13.329891 1084713 round_trippers.go:580]     Audit-Id: 12ba4762-7a24-43de-919e-3ab563488563
	I0717 19:22:13.330116 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-wqj4s","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991","resourceVersion":"873","creationTimestamp":"2023-07-17T19:10:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"8369cc45-03bf-4784-a3b1-d46615923fd9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8369cc45-03bf-4784-a3b1-d46615923fd9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0717 19:22:13.330742 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:22:13.330765 1084713 round_trippers.go:469] Request Headers:
	I0717 19:22:13.330776 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:22:13.330789 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:22:13.337532 1084713 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 19:22:13.337589 1084713 round_trippers.go:577] Response Headers:
	I0717 19:22:13.337602 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:22:13.337611 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:22:13.337620 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:22:13.337629 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:22:13.337646 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:22:13 GMT
	I0717 19:22:13.337656 1084713 round_trippers.go:580]     Audit-Id: 6c686939-6fb8-4bee-8b09-ff5b3aabf612
	I0717 19:22:13.338931 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"903","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0717 19:22:13.339389 1084713 pod_ready.go:92] pod "coredns-5d78c9869d-wqj4s" in "kube-system" namespace has status "Ready":"True"
	I0717 19:22:13.339412 1084713 pod_ready.go:81] duration metric: took 13.361486ms waiting for pod "coredns-5d78c9869d-wqj4s" in "kube-system" namespace to be "Ready" ...
	I0717 19:22:13.339424 1084713 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:22:13.339562 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-464644
	I0717 19:22:13.339574 1084713 round_trippers.go:469] Request Headers:
	I0717 19:22:13.339583 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:22:13.339589 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:22:13.343326 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:22:13.343355 1084713 round_trippers.go:577] Response Headers:
	I0717 19:22:13.343366 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:22:13.343376 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:22:13.343384 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:22:13 GMT
	I0717 19:22:13.343400 1084713 round_trippers.go:580]     Audit-Id: 0b3051e0-71f5-4726-ba32-5d68b6e20f09
	I0717 19:22:13.343412 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:22:13.343419 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:22:13.343691 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-464644","namespace":"kube-system","uid":"b672d395-d32d-4198-b486-d9cff48d8b9a","resourceVersion":"884","creationTimestamp":"2023-07-17T19:09:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.174:2379","kubernetes.io/config.hash":"d5b599b6912e0d4b30d78bf7b7e52672","kubernetes.io/config.mirror":"d5b599b6912e0d4b30d78bf7b7e52672","kubernetes.io/config.seen":"2023-07-17T19:09:54.339578401Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:09:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0717 19:22:13.344256 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:22:13.344274 1084713 round_trippers.go:469] Request Headers:
	I0717 19:22:13.344286 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:22:13.344295 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:22:13.348839 1084713 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 19:22:13.348871 1084713 round_trippers.go:577] Response Headers:
	I0717 19:22:13.348881 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:22:13.348890 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:22:13.348897 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:22:13.348904 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:22:13 GMT
	I0717 19:22:13.348912 1084713 round_trippers.go:580]     Audit-Id: 0939f11a-3fc1-4881-b69a-749e26eab10e
	I0717 19:22:13.348919 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:22:13.349268 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"903","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0717 19:22:13.349795 1084713 pod_ready.go:92] pod "etcd-multinode-464644" in "kube-system" namespace has status "Ready":"True"
	I0717 19:22:13.349817 1084713 pod_ready.go:81] duration metric: took 10.386074ms waiting for pod "etcd-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:22:13.349842 1084713 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:22:13.349927 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-464644
	I0717 19:22:13.349934 1084713 round_trippers.go:469] Request Headers:
	I0717 19:22:13.349945 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:22:13.349960 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:22:13.352979 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:22:13.353011 1084713 round_trippers.go:577] Response Headers:
	I0717 19:22:13.353021 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:22:13.353029 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:22:13.353037 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:22:13.353044 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:22:13.353051 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:22:13 GMT
	I0717 19:22:13.353060 1084713 round_trippers.go:580]     Audit-Id: 89a0d692-ae8f-42f9-a734-325be8923826
	I0717 19:22:13.353310 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-464644","namespace":"kube-system","uid":"dd6e14e2-0b92-42b9-b6a2-1562c2c70903","resourceVersion":"867","creationTimestamp":"2023-07-17T19:09:54Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.174:8443","kubernetes.io/config.hash":"b280034e13df00701aec7afc575fcc6c","kubernetes.io/config.mirror":"b280034e13df00701aec7afc575fcc6c","kubernetes.io/config.seen":"2023-07-17T19:09:54.339586957Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:09:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0717 19:22:13.353900 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:22:13.353918 1084713 round_trippers.go:469] Request Headers:
	I0717 19:22:13.353929 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:22:13.353940 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:22:13.359067 1084713 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 19:22:13.359099 1084713 round_trippers.go:577] Response Headers:
	I0717 19:22:13.359111 1084713 round_trippers.go:580]     Audit-Id: 817df63c-6e59-4ea7-bbbe-8dd2552e980b
	I0717 19:22:13.359120 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:22:13.359131 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:22:13.359140 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:22:13.359151 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:22:13.359160 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:22:13 GMT
	I0717 19:22:13.359892 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"903","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0717 19:22:13.360375 1084713 pod_ready.go:92] pod "kube-apiserver-multinode-464644" in "kube-system" namespace has status "Ready":"True"
	I0717 19:22:13.360400 1084713 pod_ready.go:81] duration metric: took 10.549483ms waiting for pod "kube-apiserver-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:22:13.360415 1084713 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:22:13.360502 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-464644
	I0717 19:22:13.360511 1084713 round_trippers.go:469] Request Headers:
	I0717 19:22:13.360518 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:22:13.360524 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:22:13.363815 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:22:13.363842 1084713 round_trippers.go:577] Response Headers:
	I0717 19:22:13.363852 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:22:13.363861 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:22:13.363869 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:22:13.363876 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:22:13 GMT
	I0717 19:22:13.363891 1084713 round_trippers.go:580]     Audit-Id: 1b425d19-54f7-4ec0-aab3-5be1c84387d3
	I0717 19:22:13.363899 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:22:13.364636 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-464644","namespace":"kube-system","uid":"6b598e8b-6c96-4014-b0a2-de37f107a0e9","resourceVersion":"880","creationTimestamp":"2023-07-17T19:09:54Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"323b8f41b30f0969feab8ff61a3ecabd","kubernetes.io/config.mirror":"323b8f41b30f0969feab8ff61a3ecabd","kubernetes.io/config.seen":"2023-07-17T19:09:54.339588566Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:09:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0717 19:22:13.365070 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:22:13.365090 1084713 round_trippers.go:469] Request Headers:
	I0717 19:22:13.365098 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:22:13.365104 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:22:13.371482 1084713 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 19:22:13.371512 1084713 round_trippers.go:577] Response Headers:
	I0717 19:22:13.371523 1084713 round_trippers.go:580]     Audit-Id: 990bccd2-9601-404a-9463-cc39762b0be6
	I0717 19:22:13.371531 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:22:13.371539 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:22:13.371548 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:22:13.371556 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:22:13.371565 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:22:13 GMT
	I0717 19:22:13.371735 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"903","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0717 19:22:13.372116 1084713 pod_ready.go:92] pod "kube-controller-manager-multinode-464644" in "kube-system" namespace has status "Ready":"True"
	I0717 19:22:13.372140 1084713 pod_ready.go:81] duration metric: took 11.711107ms waiting for pod "kube-controller-manager-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:22:13.372155 1084713 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-56qvt" in "kube-system" namespace to be "Ready" ...
	I0717 19:22:13.507652 1084713 request.go:628] Waited for 135.387171ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-56qvt
	I0717 19:22:13.507780 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-56qvt
	I0717 19:22:13.507792 1084713 round_trippers.go:469] Request Headers:
	I0717 19:22:13.507800 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:22:13.507806 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:22:13.511299 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:22:13.511333 1084713 round_trippers.go:577] Response Headers:
	I0717 19:22:13.511345 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:22:13.511354 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:22:13.511362 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:22:13 GMT
	I0717 19:22:13.511370 1084713 round_trippers.go:580]     Audit-Id: f5d2656d-fcd7-4c87-88b6-40653f6e3740
	I0717 19:22:13.511378 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:22:13.511387 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:22:13.511512 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-56qvt","generateName":"kube-proxy-","namespace":"kube-system","uid":"8207802f-ef88-4f7f-871c-bc528ef98b58","resourceVersion":"721","creationTimestamp":"2023-07-17T19:11:40Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cab15633-0bd8-4e4c-a88b-c03af1462254","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:11:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cab15633-0bd8-4e4c-a88b-c03af1462254\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0717 19:22:13.707543 1084713 request.go:628] Waited for 195.400245ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes/multinode-464644-m03
	I0717 19:22:13.707611 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644-m03
	I0717 19:22:13.707616 1084713 round_trippers.go:469] Request Headers:
	I0717 19:22:13.707624 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:22:13.707630 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:22:13.711216 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:22:13.711244 1084713 round_trippers.go:577] Response Headers:
	I0717 19:22:13.711255 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:22:13.711263 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:22:13.711270 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:22:13.711278 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:22:13.711285 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:22:13 GMT
	I0717 19:22:13.711294 1084713 round_trippers.go:580]     Audit-Id: bf3252e6-61ce-405b-bb71-48bd561e0b11
	I0717 19:22:13.711468 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644-m03","uid":"78befe00-f3c3-4f9c-86ff-aea572ef1c48","resourceVersion":"887","creationTimestamp":"2023-07-17T19:12:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:12:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3413 chars]
	I0717 19:22:13.711841 1084713 pod_ready.go:92] pod "kube-proxy-56qvt" in "kube-system" namespace has status "Ready":"True"
	I0717 19:22:13.711859 1084713 pod_ready.go:81] duration metric: took 339.690654ms waiting for pod "kube-proxy-56qvt" in "kube-system" namespace to be "Ready" ...
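Each per-pod wait in this block follows the same pattern (GET the pod, then GET the node it runs on); a rough kubectl analogue for the system-critical set, assuming the standard labels on kubeadm-deployed components:

    kubectl --kubeconfig /var/lib/minikube/kubeconfig -n kube-system \
      wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=6m
    kubectl --kubeconfig /var/lib/minikube/kubeconfig -n kube-system \
      wait pod -l k8s-app=kube-proxy --for=condition=Ready --timeout=6m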
	I0717 19:22:13.711875 1084713 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j6ds6" in "kube-system" namespace to be "Ready" ...
	I0717 19:22:13.907390 1084713 request.go:628] Waited for 195.419657ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j6ds6
	I0717 19:22:13.907483 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j6ds6
	I0717 19:22:13.907495 1084713 round_trippers.go:469] Request Headers:
	I0717 19:22:13.907505 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:22:13.907513 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:22:13.910338 1084713 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:22:13.910362 1084713 round_trippers.go:577] Response Headers:
	I0717 19:22:13.910369 1084713 round_trippers.go:580]     Audit-Id: 8b784431-a550-4d59-9d57-c99fd8f78f36
	I0717 19:22:13.910375 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:22:13.910380 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:22:13.910386 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:22:13.910391 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:22:13.910396 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:22:13 GMT
	I0717 19:22:13.910622 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-j6ds6","generateName":"kube-proxy-","namespace":"kube-system","uid":"439bb5b7-0e46-4762-a9a7-e648a212ad93","resourceVersion":"1033","creationTimestamp":"2023-07-17T19:10:52Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cab15633-0bd8-4e4c-a88b-c03af1462254","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cab15633-0bd8-4e4c-a88b-c03af1462254\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5883 chars]
	I0717 19:22:14.107740 1084713 request.go:628] Waited for 196.430631ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes/multinode-464644-m02
	I0717 19:22:14.107806 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644-m02
	I0717 19:22:14.107812 1084713 round_trippers.go:469] Request Headers:
	I0717 19:22:14.107820 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:22:14.107826 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:22:14.110756 1084713 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:22:14.110799 1084713 round_trippers.go:577] Response Headers:
	I0717 19:22:14.110810 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:22:14.110817 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:22:14 GMT
	I0717 19:22:14.110823 1084713 round_trippers.go:580]     Audit-Id: aa4ac172-2f8f-44ff-98de-2d84e19af60f
	I0717 19:22:14.110828 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:22:14.110833 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:22:14.110839 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:22:14.110941 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644-m02","uid":"8a7d3b54-fa08-45cf-b8cb-6e947d45ee9a","resourceVersion":"1027","creationTimestamp":"2023-07-17T19:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:22:12Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3441 chars]
	I0717 19:22:14.612081 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j6ds6
	I0717 19:22:14.612111 1084713 round_trippers.go:469] Request Headers:
	I0717 19:22:14.612119 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:22:14.612126 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:22:14.617303 1084713 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 19:22:14.617342 1084713 round_trippers.go:577] Response Headers:
	I0717 19:22:14.617355 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:22:14.617365 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:22:14.617375 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:22:14.617384 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:22:14 GMT
	I0717 19:22:14.617392 1084713 round_trippers.go:580]     Audit-Id: c7bf3ff3-384a-4c4d-9b94-a01322a18afa
	I0717 19:22:14.617404 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:22:14.617808 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-j6ds6","generateName":"kube-proxy-","namespace":"kube-system","uid":"439bb5b7-0e46-4762-a9a7-e648a212ad93","resourceVersion":"1043","creationTimestamp":"2023-07-17T19:10:52Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cab15633-0bd8-4e4c-a88b-c03af1462254","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cab15633-0bd8-4e4c-a88b-c03af1462254\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I0717 19:22:14.618403 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644-m02
	I0717 19:22:14.618421 1084713 round_trippers.go:469] Request Headers:
	I0717 19:22:14.618430 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:22:14.618435 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:22:14.620968 1084713 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:22:14.620993 1084713 round_trippers.go:577] Response Headers:
	I0717 19:22:14.621004 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:22:14.621012 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:22:14.621020 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:22:14 GMT
	I0717 19:22:14.621028 1084713 round_trippers.go:580]     Audit-Id: 01a0625c-e9bd-41f2-9662-2312daa77849
	I0717 19:22:14.621039 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:22:14.621047 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:22:14.621232 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644-m02","uid":"8a7d3b54-fa08-45cf-b8cb-6e947d45ee9a","resourceVersion":"1027","creationTimestamp":"2023-07-17T19:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:22:12Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3441 chars]
	I0717 19:22:14.621616 1084713 pod_ready.go:92] pod "kube-proxy-j6ds6" in "kube-system" namespace has status "Ready":"True"
	I0717 19:22:14.621639 1084713 pod_ready.go:81] duration metric: took 909.753053ms waiting for pod "kube-proxy-j6ds6" in "kube-system" namespace to be "Ready" ...
	I0717 19:22:14.621654 1084713 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qwsn5" in "kube-system" namespace to be "Ready" ...
	I0717 19:22:14.707034 1084713 request.go:628] Waited for 85.290495ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qwsn5
	I0717 19:22:14.707106 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qwsn5
	I0717 19:22:14.707110 1084713 round_trippers.go:469] Request Headers:
	I0717 19:22:14.707118 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:22:14.707134 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:22:14.710622 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:22:14.710662 1084713 round_trippers.go:577] Response Headers:
	I0717 19:22:14.710672 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:22:14.710681 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:22:14.710689 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:22:14.710697 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:22:14.710706 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:22:14 GMT
	I0717 19:22:14.710714 1084713 round_trippers.go:580]     Audit-Id: c25bf248-4641-4f52-bd5d-92539c5de648
	I0717 19:22:14.711177 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qwsn5","generateName":"kube-proxy-","namespace":"kube-system","uid":"50e3f5e0-00d9-4412-b4de-649bc29733e9","resourceVersion":"776","creationTimestamp":"2023-07-17T19:10:08Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cab15633-0bd8-4e4c-a88b-c03af1462254","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cab15633-0bd8-4e4c-a88b-c03af1462254\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0717 19:22:14.906942 1084713 request.go:628] Waited for 195.313421ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:22:14.907043 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:22:14.907052 1084713 round_trippers.go:469] Request Headers:
	I0717 19:22:14.907067 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:22:14.907082 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:22:14.910295 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:22:14.910322 1084713 round_trippers.go:577] Response Headers:
	I0717 19:22:14.910329 1084713 round_trippers.go:580]     Audit-Id: 79168691-5b68-4e12-b044-fc391c137fa1
	I0717 19:22:14.910341 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:22:14.910349 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:22:14.910358 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:22:14.910372 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:22:14.910381 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:22:14 GMT
	I0717 19:22:14.910567 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"903","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0717 19:22:14.911046 1084713 pod_ready.go:92] pod "kube-proxy-qwsn5" in "kube-system" namespace has status "Ready":"True"
	I0717 19:22:14.911069 1084713 pod_ready.go:81] duration metric: took 289.407546ms waiting for pod "kube-proxy-qwsn5" in "kube-system" namespace to be "Ready" ...
	I0717 19:22:14.911082 1084713 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:22:15.107515 1084713 request.go:628] Waited for 196.347656ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-464644
	I0717 19:22:15.107604 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-464644
	I0717 19:22:15.107619 1084713 round_trippers.go:469] Request Headers:
	I0717 19:22:15.107636 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:22:15.107650 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:22:15.111090 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:22:15.111122 1084713 round_trippers.go:577] Response Headers:
	I0717 19:22:15.111132 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:22:15 GMT
	I0717 19:22:15.111138 1084713 round_trippers.go:580]     Audit-Id: 0ad6dd3c-ee44-480b-8f14-6dae97f9cf31
	I0717 19:22:15.111145 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:22:15.111153 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:22:15.111162 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:22:15.111171 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:22:15.111377 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-464644","namespace":"kube-system","uid":"04e5660d-abb0-432a-861e-c5c242edfb98","resourceVersion":"894","creationTimestamp":"2023-07-17T19:09:54Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6435a6b37c43f83175753c4199c85407","kubernetes.io/config.mirror":"6435a6b37c43f83175753c4199c85407","kubernetes.io/config.seen":"2023-07-17T19:09:54.339590320Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:09:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0717 19:22:15.307243 1084713 request.go:628] Waited for 195.395451ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:22:15.307340 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:22:15.307347 1084713 round_trippers.go:469] Request Headers:
	I0717 19:22:15.307355 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:22:15.307362 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:22:15.310718 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:22:15.310749 1084713 round_trippers.go:577] Response Headers:
	I0717 19:22:15.310758 1084713 round_trippers.go:580]     Audit-Id: bd5da8a3-c16b-4d95-9fa1-c8505c9c0335
	I0717 19:22:15.310767 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:22:15.310774 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:22:15.310780 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:22:15.310788 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:22:15.310795 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:22:15 GMT
	I0717 19:22:15.311529 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"903","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0717 19:22:15.312179 1084713 pod_ready.go:92] pod "kube-scheduler-multinode-464644" in "kube-system" namespace has status "Ready":"True"
	I0717 19:22:15.312232 1084713 pod_ready.go:81] duration metric: took 401.139854ms waiting for pod "kube-scheduler-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:22:15.312262 1084713 pod_ready.go:38] duration metric: took 1.999426383s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:22:15.312300 1084713 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 19:22:15.312402 1084713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:22:15.330451 1084713 system_svc.go:56] duration metric: took 18.146701ms WaitForService to wait for kubelet.
	I0717 19:22:15.330492 1084713 kubeadm.go:581] duration metric: took 2.043815406s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 19:22:15.330522 1084713 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:22:15.506979 1084713 request.go:628] Waited for 176.357876ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes
	I0717 19:22:15.507061 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes
	I0717 19:22:15.507066 1084713 round_trippers.go:469] Request Headers:
	I0717 19:22:15.507075 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:22:15.507081 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:22:15.510320 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:22:15.510356 1084713 round_trippers.go:577] Response Headers:
	I0717 19:22:15.510369 1084713 round_trippers.go:580]     Audit-Id: b263fe36-e8a9-4391-a1bf-dee4d1cd4810
	I0717 19:22:15.510378 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:22:15.510388 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:22:15.510395 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:22:15.510405 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:22:15.510414 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:22:15 GMT
	I0717 19:22:15.510707 1084713 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1047"},"items":[{"metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"903","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 15105 chars]
	I0717 19:22:15.511459 1084713 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 19:22:15.511491 1084713 node_conditions.go:123] node cpu capacity is 2
	I0717 19:22:15.511505 1084713 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 19:22:15.511509 1084713 node_conditions.go:123] node cpu capacity is 2
	I0717 19:22:15.511513 1084713 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 19:22:15.511517 1084713 node_conditions.go:123] node cpu capacity is 2
	I0717 19:22:15.511521 1084713 node_conditions.go:105] duration metric: took 180.994617ms to run NodePressure ...
	I0717 19:22:15.511532 1084713 start.go:228] waiting for startup goroutines ...
	I0717 19:22:15.511584 1084713 start.go:242] writing updated cluster config ...
	I0717 19:22:15.512089 1084713 config.go:182] Loaded profile config "multinode-464644": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:22:15.512181 1084713 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/config.json ...
	I0717 19:22:15.516473 1084713 out.go:177] * Starting worker node multinode-464644-m03 in cluster multinode-464644
	I0717 19:22:15.518193 1084713 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 19:22:15.518237 1084713 cache.go:57] Caching tarball of preloaded images
	I0717 19:22:15.518368 1084713 preload.go:174] Found /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 19:22:15.518384 1084713 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 19:22:15.518521 1084713 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/config.json ...
	I0717 19:22:15.518741 1084713 start.go:365] acquiring machines lock for multinode-464644-m03: {Name:mk1fbc5d19c9a63796b02883911e39f1c406ff89 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:22:15.518803 1084713 start.go:369] acquired machines lock for "multinode-464644-m03" in 36.405µs
	I0717 19:22:15.518829 1084713 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:22:15.518836 1084713 fix.go:54] fixHost starting: m03
	I0717 19:22:15.519115 1084713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:22:15.519165 1084713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:22:15.534976 1084713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44707
	I0717 19:22:15.535462 1084713 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:22:15.536073 1084713 main.go:141] libmachine: Using API Version  1
	I0717 19:22:15.536103 1084713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:22:15.536504 1084713 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:22:15.536723 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .DriverName
	I0717 19:22:15.536885 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetState
	I0717 19:22:15.538713 1084713 fix.go:102] recreateIfNeeded on multinode-464644-m03: state=Running err=<nil>
	W0717 19:22:15.538741 1084713 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 19:22:15.541675 1084713 out.go:177] * Updating the running kvm2 "multinode-464644-m03" VM ...
	I0717 19:22:15.543672 1084713 machine.go:88] provisioning docker machine ...
	I0717 19:22:15.543715 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .DriverName
	I0717 19:22:15.544026 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetMachineName
	I0717 19:22:15.544244 1084713 buildroot.go:166] provisioning hostname "multinode-464644-m03"
	I0717 19:22:15.544270 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetMachineName
	I0717 19:22:15.544414 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetSSHHostname
	I0717 19:22:15.547110 1084713 main.go:141] libmachine: (multinode-464644-m03) DBG | domain multinode-464644-m03 has defined MAC address 52:54:00:77:92:19 in network mk-multinode-464644
	I0717 19:22:15.547549 1084713 main.go:141] libmachine: (multinode-464644-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:92:19", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:12:15 +0000 UTC Type:0 Mac:52:54:00:77:92:19 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-464644-m03 Clientid:01:52:54:00:77:92:19}
	I0717 19:22:15.547582 1084713 main.go:141] libmachine: (multinode-464644-m03) DBG | domain multinode-464644-m03 has defined IP address 192.168.39.247 and MAC address 52:54:00:77:92:19 in network mk-multinode-464644
	I0717 19:22:15.547776 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetSSHPort
	I0717 19:22:15.547973 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetSSHKeyPath
	I0717 19:22:15.548133 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetSSHKeyPath
	I0717 19:22:15.548300 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetSSHUsername
	I0717 19:22:15.548445 1084713 main.go:141] libmachine: Using SSH client type: native
	I0717 19:22:15.548856 1084713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0717 19:22:15.548872 1084713 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-464644-m03 && echo "multinode-464644-m03" | sudo tee /etc/hostname
	I0717 19:22:15.696743 1084713 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-464644-m03
	
	I0717 19:22:15.696785 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetSSHHostname
	I0717 19:22:15.700118 1084713 main.go:141] libmachine: (multinode-464644-m03) DBG | domain multinode-464644-m03 has defined MAC address 52:54:00:77:92:19 in network mk-multinode-464644
	I0717 19:22:15.700405 1084713 main.go:141] libmachine: (multinode-464644-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:92:19", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:12:15 +0000 UTC Type:0 Mac:52:54:00:77:92:19 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-464644-m03 Clientid:01:52:54:00:77:92:19}
	I0717 19:22:15.700441 1084713 main.go:141] libmachine: (multinode-464644-m03) DBG | domain multinode-464644-m03 has defined IP address 192.168.39.247 and MAC address 52:54:00:77:92:19 in network mk-multinode-464644
	I0717 19:22:15.700667 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetSSHPort
	I0717 19:22:15.700922 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetSSHKeyPath
	I0717 19:22:15.701102 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetSSHKeyPath
	I0717 19:22:15.701246 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetSSHUsername
	I0717 19:22:15.701406 1084713 main.go:141] libmachine: Using SSH client type: native
	I0717 19:22:15.701943 1084713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0717 19:22:15.701965 1084713 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-464644-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-464644-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-464644-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:22:15.830949 1084713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:22:15.830981 1084713 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16890-1061725/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-1061725/.minikube}
	I0717 19:22:15.831009 1084713 buildroot.go:174] setting up certificates
	I0717 19:22:15.831019 1084713 provision.go:83] configureAuth start
	I0717 19:22:15.831028 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetMachineName
	I0717 19:22:15.831367 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetIP
	I0717 19:22:15.834253 1084713 main.go:141] libmachine: (multinode-464644-m03) DBG | domain multinode-464644-m03 has defined MAC address 52:54:00:77:92:19 in network mk-multinode-464644
	I0717 19:22:15.834672 1084713 main.go:141] libmachine: (multinode-464644-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:92:19", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:12:15 +0000 UTC Type:0 Mac:52:54:00:77:92:19 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-464644-m03 Clientid:01:52:54:00:77:92:19}
	I0717 19:22:15.834722 1084713 main.go:141] libmachine: (multinode-464644-m03) DBG | domain multinode-464644-m03 has defined IP address 192.168.39.247 and MAC address 52:54:00:77:92:19 in network mk-multinode-464644
	I0717 19:22:15.834854 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetSSHHostname
	I0717 19:22:15.837478 1084713 main.go:141] libmachine: (multinode-464644-m03) DBG | domain multinode-464644-m03 has defined MAC address 52:54:00:77:92:19 in network mk-multinode-464644
	I0717 19:22:15.837891 1084713 main.go:141] libmachine: (multinode-464644-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:92:19", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:12:15 +0000 UTC Type:0 Mac:52:54:00:77:92:19 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-464644-m03 Clientid:01:52:54:00:77:92:19}
	I0717 19:22:15.837916 1084713 main.go:141] libmachine: (multinode-464644-m03) DBG | domain multinode-464644-m03 has defined IP address 192.168.39.247 and MAC address 52:54:00:77:92:19 in network mk-multinode-464644
	I0717 19:22:15.838201 1084713 provision.go:138] copyHostCerts
	I0717 19:22:15.838237 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem
	I0717 19:22:15.838279 1084713 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem, removing ...
	I0717 19:22:15.838292 1084713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem
	I0717 19:22:15.838384 1084713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem (1123 bytes)
	I0717 19:22:15.838483 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem
	I0717 19:22:15.838516 1084713 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem, removing ...
	I0717 19:22:15.838527 1084713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem
	I0717 19:22:15.838564 1084713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem (1675 bytes)
	I0717 19:22:15.838626 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem
	I0717 19:22:15.838654 1084713 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem, removing ...
	I0717 19:22:15.838664 1084713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem
	I0717 19:22:15.838695 1084713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem (1082 bytes)
	I0717 19:22:15.838758 1084713 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem org=jenkins.multinode-464644-m03 san=[192.168.39.247 192.168.39.247 localhost 127.0.0.1 minikube multinode-464644-m03]
	I0717 19:22:15.944761 1084713 provision.go:172] copyRemoteCerts
	I0717 19:22:15.944856 1084713 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:22:15.944890 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetSSHHostname
	I0717 19:22:15.947841 1084713 main.go:141] libmachine: (multinode-464644-m03) DBG | domain multinode-464644-m03 has defined MAC address 52:54:00:77:92:19 in network mk-multinode-464644
	I0717 19:22:15.948175 1084713 main.go:141] libmachine: (multinode-464644-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:92:19", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:12:15 +0000 UTC Type:0 Mac:52:54:00:77:92:19 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-464644-m03 Clientid:01:52:54:00:77:92:19}
	I0717 19:22:15.948198 1084713 main.go:141] libmachine: (multinode-464644-m03) DBG | domain multinode-464644-m03 has defined IP address 192.168.39.247 and MAC address 52:54:00:77:92:19 in network mk-multinode-464644
	I0717 19:22:15.948524 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetSSHPort
	I0717 19:22:15.948785 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetSSHKeyPath
	I0717 19:22:15.948992 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetSSHUsername
	I0717 19:22:15.949156 1084713 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644-m03/id_rsa Username:docker}
	I0717 19:22:16.046374 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 19:22:16.046461 1084713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 19:22:16.073342 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 19:22:16.073423 1084713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0717 19:22:16.100042 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 19:22:16.100118 1084713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:22:16.127729 1084713 provision.go:86] duration metric: configureAuth took 296.691003ms
	I0717 19:22:16.127771 1084713 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:22:16.128078 1084713 config.go:182] Loaded profile config "multinode-464644": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:22:16.128164 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetSSHHostname
	I0717 19:22:16.131339 1084713 main.go:141] libmachine: (multinode-464644-m03) DBG | domain multinode-464644-m03 has defined MAC address 52:54:00:77:92:19 in network mk-multinode-464644
	I0717 19:22:16.131697 1084713 main.go:141] libmachine: (multinode-464644-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:92:19", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:12:15 +0000 UTC Type:0 Mac:52:54:00:77:92:19 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-464644-m03 Clientid:01:52:54:00:77:92:19}
	I0717 19:22:16.131736 1084713 main.go:141] libmachine: (multinode-464644-m03) DBG | domain multinode-464644-m03 has defined IP address 192.168.39.247 and MAC address 52:54:00:77:92:19 in network mk-multinode-464644
	I0717 19:22:16.131941 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetSSHPort
	I0717 19:22:16.132172 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetSSHKeyPath
	I0717 19:22:16.132365 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetSSHKeyPath
	I0717 19:22:16.132513 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetSSHUsername
	I0717 19:22:16.132739 1084713 main.go:141] libmachine: Using SSH client type: native
	I0717 19:22:16.133140 1084713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0717 19:22:16.133156 1084713 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:23:46.834824 1084713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:23:46.834866 1084713 machine.go:91] provisioned docker machine in 1m31.291174274s
	I0717 19:23:46.834882 1084713 start.go:300] post-start starting for "multinode-464644-m03" (driver="kvm2")
	I0717 19:23:46.834895 1084713 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:23:46.834955 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .DriverName
	I0717 19:23:46.835351 1084713 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:23:46.835403 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetSSHHostname
	I0717 19:23:46.838671 1084713 main.go:141] libmachine: (multinode-464644-m03) DBG | domain multinode-464644-m03 has defined MAC address 52:54:00:77:92:19 in network mk-multinode-464644
	I0717 19:23:46.839109 1084713 main.go:141] libmachine: (multinode-464644-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:92:19", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:12:15 +0000 UTC Type:0 Mac:52:54:00:77:92:19 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-464644-m03 Clientid:01:52:54:00:77:92:19}
	I0717 19:23:46.839142 1084713 main.go:141] libmachine: (multinode-464644-m03) DBG | domain multinode-464644-m03 has defined IP address 192.168.39.247 and MAC address 52:54:00:77:92:19 in network mk-multinode-464644
	I0717 19:23:46.839326 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetSSHPort
	I0717 19:23:46.839542 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetSSHKeyPath
	I0717 19:23:46.839792 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetSSHUsername
	I0717 19:23:46.839972 1084713 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644-m03/id_rsa Username:docker}
	I0717 19:23:46.935684 1084713 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:23:46.940804 1084713 command_runner.go:130] > NAME=Buildroot
	I0717 19:23:46.940842 1084713 command_runner.go:130] > VERSION=2021.02.12-1-gf5d52c7-dirty
	I0717 19:23:46.940850 1084713 command_runner.go:130] > ID=buildroot
	I0717 19:23:46.940858 1084713 command_runner.go:130] > VERSION_ID=2021.02.12
	I0717 19:23:46.940866 1084713 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0717 19:23:46.940918 1084713 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 19:23:46.940935 1084713 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/addons for local assets ...
	I0717 19:23:46.941022 1084713 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/files for local assets ...
	I0717 19:23:46.941103 1084713 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> 10689542.pem in /etc/ssl/certs
	I0717 19:23:46.941119 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> /etc/ssl/certs/10689542.pem
	I0717 19:23:46.941209 1084713 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:23:46.950419 1084713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:23:46.977051 1084713 start.go:303] post-start completed in 142.152333ms
	I0717 19:23:46.977118 1084713 fix.go:56] fixHost completed within 1m31.458282278s
	I0717 19:23:46.977146 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetSSHHostname
	I0717 19:23:46.980174 1084713 main.go:141] libmachine: (multinode-464644-m03) DBG | domain multinode-464644-m03 has defined MAC address 52:54:00:77:92:19 in network mk-multinode-464644
	I0717 19:23:46.980575 1084713 main.go:141] libmachine: (multinode-464644-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:92:19", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:12:15 +0000 UTC Type:0 Mac:52:54:00:77:92:19 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-464644-m03 Clientid:01:52:54:00:77:92:19}
	I0717 19:23:46.980631 1084713 main.go:141] libmachine: (multinode-464644-m03) DBG | domain multinode-464644-m03 has defined IP address 192.168.39.247 and MAC address 52:54:00:77:92:19 in network mk-multinode-464644
	I0717 19:23:46.980798 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetSSHPort
	I0717 19:23:46.981003 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetSSHKeyPath
	I0717 19:23:46.981120 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetSSHKeyPath
	I0717 19:23:46.981228 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetSSHUsername
	I0717 19:23:46.981378 1084713 main.go:141] libmachine: Using SSH client type: native
	I0717 19:23:46.981971 1084713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0717 19:23:46.981989 1084713 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:23:47.115136 1084713 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689621827.106695641
	
	I0717 19:23:47.115170 1084713 fix.go:206] guest clock: 1689621827.106695641
	I0717 19:23:47.115182 1084713 fix.go:219] Guest: 2023-07-17 19:23:47.106695641 +0000 UTC Remote: 2023-07-17 19:23:46.977123423 +0000 UTC m=+549.960981105 (delta=129.572218ms)
	I0717 19:23:47.115206 1084713 fix.go:190] guest clock delta is within tolerance: 129.572218ms
	I0717 19:23:47.115214 1084713 start.go:83] releasing machines lock for "multinode-464644-m03", held for 1m31.596394092s
	I0717 19:23:47.115244 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .DriverName
	I0717 19:23:47.115603 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetIP
	I0717 19:23:47.118760 1084713 main.go:141] libmachine: (multinode-464644-m03) DBG | domain multinode-464644-m03 has defined MAC address 52:54:00:77:92:19 in network mk-multinode-464644
	I0717 19:23:47.119309 1084713 main.go:141] libmachine: (multinode-464644-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:92:19", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:12:15 +0000 UTC Type:0 Mac:52:54:00:77:92:19 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-464644-m03 Clientid:01:52:54:00:77:92:19}
	I0717 19:23:47.119350 1084713 main.go:141] libmachine: (multinode-464644-m03) DBG | domain multinode-464644-m03 has defined IP address 192.168.39.247 and MAC address 52:54:00:77:92:19 in network mk-multinode-464644
	I0717 19:23:47.121739 1084713 out.go:177] * Found network options:
	I0717 19:23:47.124589 1084713 out.go:177]   - NO_PROXY=192.168.39.174,192.168.39.49
	W0717 19:23:47.126746 1084713 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 19:23:47.126779 1084713 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 19:23:47.126807 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .DriverName
	I0717 19:23:47.127662 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .DriverName
	I0717 19:23:47.127925 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .DriverName
	I0717 19:23:47.128041 1084713 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:23:47.128101 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetSSHHostname
	W0717 19:23:47.128119 1084713 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 19:23:47.128152 1084713 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 19:23:47.128247 1084713 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:23:47.128277 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetSSHHostname
	I0717 19:23:47.131261 1084713 main.go:141] libmachine: (multinode-464644-m03) DBG | domain multinode-464644-m03 has defined MAC address 52:54:00:77:92:19 in network mk-multinode-464644
	I0717 19:23:47.131439 1084713 main.go:141] libmachine: (multinode-464644-m03) DBG | domain multinode-464644-m03 has defined MAC address 52:54:00:77:92:19 in network mk-multinode-464644
	I0717 19:23:47.131694 1084713 main.go:141] libmachine: (multinode-464644-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:92:19", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:12:15 +0000 UTC Type:0 Mac:52:54:00:77:92:19 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-464644-m03 Clientid:01:52:54:00:77:92:19}
	I0717 19:23:47.131725 1084713 main.go:141] libmachine: (multinode-464644-m03) DBG | domain multinode-464644-m03 has defined IP address 192.168.39.247 and MAC address 52:54:00:77:92:19 in network mk-multinode-464644
	I0717 19:23:47.131956 1084713 main.go:141] libmachine: (multinode-464644-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:92:19", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:12:15 +0000 UTC Type:0 Mac:52:54:00:77:92:19 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-464644-m03 Clientid:01:52:54:00:77:92:19}
	I0717 19:23:47.132025 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetSSHPort
	I0717 19:23:47.132117 1084713 main.go:141] libmachine: (multinode-464644-m03) DBG | domain multinode-464644-m03 has defined IP address 192.168.39.247 and MAC address 52:54:00:77:92:19 in network mk-multinode-464644
	I0717 19:23:47.132128 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetSSHPort
	I0717 19:23:47.132227 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetSSHKeyPath
	I0717 19:23:47.132319 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetSSHKeyPath
	I0717 19:23:47.132409 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetSSHUsername
	I0717 19:23:47.132482 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetSSHUsername
	I0717 19:23:47.132582 1084713 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644-m03/id_rsa Username:docker}
	I0717 19:23:47.132706 1084713 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644-m03/id_rsa Username:docker}
	I0717 19:23:47.378005 1084713 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0717 19:23:47.378173 1084713 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 19:23:47.384335 1084713 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0717 19:23:47.384470 1084713 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:23:47.384554 1084713 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:23:47.394889 1084713 cni.go:265] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 19:23:47.394922 1084713 start.go:469] detecting cgroup driver to use...
	I0717 19:23:47.394996 1084713 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:23:47.412237 1084713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:23:47.427247 1084713 docker.go:196] disabling cri-docker service (if available) ...
	I0717 19:23:47.427312 1084713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:23:47.445896 1084713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:23:47.461242 1084713 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:23:47.621827 1084713 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:23:47.776613 1084713 docker.go:212] disabling docker service ...
	I0717 19:23:47.776709 1084713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:23:47.793914 1084713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:23:47.809232 1084713 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:23:47.962914 1084713 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:23:48.117823 1084713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:23:48.133036 1084713 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:23:48.154319 1084713 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0717 19:23:48.154657 1084713 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:23:48.154718 1084713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:23:48.166424 1084713 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:23:48.166527 1084713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:23:48.179525 1084713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:23:48.192007 1084713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:23:48.204086 1084713 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:23:48.215518 1084713 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:23:48.225760 1084713 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0717 19:23:48.225867 1084713 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:23:48.236897 1084713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:23:48.372391 1084713 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:23:48.619567 1084713 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:23:48.619645 1084713 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:23:48.625190 1084713 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0717 19:23:48.625229 1084713 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0717 19:23:48.625242 1084713 command_runner.go:130] > Device: 16h/22d	Inode: 1151        Links: 1
	I0717 19:23:48.625257 1084713 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 19:23:48.625264 1084713 command_runner.go:130] > Access: 2023-07-17 19:23:48.529032068 +0000
	I0717 19:23:48.625274 1084713 command_runner.go:130] > Modify: 2023-07-17 19:23:48.529032068 +0000
	I0717 19:23:48.625281 1084713 command_runner.go:130] > Change: 2023-07-17 19:23:48.529032068 +0000
	I0717 19:23:48.625287 1084713 command_runner.go:130] >  Birth: -
	I0717 19:23:48.625336 1084713 start.go:537] Will wait 60s for crictl version
	I0717 19:23:48.625402 1084713 ssh_runner.go:195] Run: which crictl
	I0717 19:23:48.629792 1084713 command_runner.go:130] > /usr/bin/crictl
	I0717 19:23:48.629880 1084713 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:23:48.668584 1084713 command_runner.go:130] > Version:  0.1.0
	I0717 19:23:48.668618 1084713 command_runner.go:130] > RuntimeName:  cri-o
	I0717 19:23:48.668625 1084713 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0717 19:23:48.668633 1084713 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0717 19:23:48.668657 1084713 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 19:23:48.668734 1084713 ssh_runner.go:195] Run: crio --version
	I0717 19:23:48.717196 1084713 command_runner.go:130] > crio version 1.24.1
	I0717 19:23:48.717230 1084713 command_runner.go:130] > Version:          1.24.1
	I0717 19:23:48.717240 1084713 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0717 19:23:48.717246 1084713 command_runner.go:130] > GitTreeState:     dirty
	I0717 19:23:48.717254 1084713 command_runner.go:130] > BuildDate:        2023-07-15T02:24:22Z
	I0717 19:23:48.717261 1084713 command_runner.go:130] > GoVersion:        go1.19.9
	I0717 19:23:48.717267 1084713 command_runner.go:130] > Compiler:         gc
	I0717 19:23:48.717273 1084713 command_runner.go:130] > Platform:         linux/amd64
	I0717 19:23:48.717281 1084713 command_runner.go:130] > Linkmode:         dynamic
	I0717 19:23:48.717291 1084713 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0717 19:23:48.717297 1084713 command_runner.go:130] > SeccompEnabled:   true
	I0717 19:23:48.717303 1084713 command_runner.go:130] > AppArmorEnabled:  false
	I0717 19:23:48.718938 1084713 ssh_runner.go:195] Run: crio --version
	I0717 19:23:48.780153 1084713 command_runner.go:130] > crio version 1.24.1
	I0717 19:23:48.780184 1084713 command_runner.go:130] > Version:          1.24.1
	I0717 19:23:48.780191 1084713 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0717 19:23:48.780196 1084713 command_runner.go:130] > GitTreeState:     dirty
	I0717 19:23:48.780202 1084713 command_runner.go:130] > BuildDate:        2023-07-15T02:24:22Z
	I0717 19:23:48.780207 1084713 command_runner.go:130] > GoVersion:        go1.19.9
	I0717 19:23:48.780211 1084713 command_runner.go:130] > Compiler:         gc
	I0717 19:23:48.780222 1084713 command_runner.go:130] > Platform:         linux/amd64
	I0717 19:23:48.780228 1084713 command_runner.go:130] > Linkmode:         dynamic
	I0717 19:23:48.780237 1084713 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0717 19:23:48.780241 1084713 command_runner.go:130] > SeccompEnabled:   true
	I0717 19:23:48.780246 1084713 command_runner.go:130] > AppArmorEnabled:  false
	I0717 19:23:48.783075 1084713 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0717 19:23:48.784870 1084713 out.go:177]   - env NO_PROXY=192.168.39.174
	I0717 19:23:48.786844 1084713 out.go:177]   - env NO_PROXY=192.168.39.174,192.168.39.49
	I0717 19:23:48.788766 1084713 main.go:141] libmachine: (multinode-464644-m03) Calling .GetIP
	I0717 19:23:48.792211 1084713 main.go:141] libmachine: (multinode-464644-m03) DBG | domain multinode-464644-m03 has defined MAC address 52:54:00:77:92:19 in network mk-multinode-464644
	I0717 19:23:48.792662 1084713 main.go:141] libmachine: (multinode-464644-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:92:19", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:12:15 +0000 UTC Type:0 Mac:52:54:00:77:92:19 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:multinode-464644-m03 Clientid:01:52:54:00:77:92:19}
	I0717 19:23:48.792695 1084713 main.go:141] libmachine: (multinode-464644-m03) DBG | domain multinode-464644-m03 has defined IP address 192.168.39.247 and MAC address 52:54:00:77:92:19 in network mk-multinode-464644
	I0717 19:23:48.792902 1084713 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 19:23:48.797839 1084713 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0717 19:23:48.797909 1084713 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644 for IP: 192.168.39.247
	I0717 19:23:48.797934 1084713 certs.go:190] acquiring lock for shared ca certs: {Name:mkdfbc203d7bd874aff2f1c4e5c3352251804e32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:23:48.798109 1084713 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key
	I0717 19:23:48.798154 1084713 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key
	I0717 19:23:48.798167 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 19:23:48.798183 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 19:23:48.798194 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 19:23:48.798206 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 19:23:48.798263 1084713 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem (1338 bytes)
	W0717 19:23:48.798294 1084713 certs.go:433] ignoring /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954_empty.pem, impossibly tiny 0 bytes
	I0717 19:23:48.798306 1084713 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:23:48.798328 1084713 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem (1082 bytes)
	I0717 19:23:48.798356 1084713 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:23:48.798378 1084713 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem (1675 bytes)
	I0717 19:23:48.798423 1084713 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:23:48.798449 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> /usr/share/ca-certificates/10689542.pem
	I0717 19:23:48.798462 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:23:48.798472 1084713 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem -> /usr/share/ca-certificates/1068954.pem
	I0717 19:23:48.798947 1084713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:23:48.825445 1084713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 19:23:48.852159 1084713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:23:48.880286 1084713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 19:23:48.907913 1084713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /usr/share/ca-certificates/10689542.pem (1708 bytes)
	I0717 19:23:48.935623 1084713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:23:48.962960 1084713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem --> /usr/share/ca-certificates/1068954.pem (1338 bytes)
	I0717 19:23:48.989347 1084713 ssh_runner.go:195] Run: openssl version
	I0717 19:23:48.996456 1084713 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0717 19:23:48.996553 1084713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10689542.pem && ln -fs /usr/share/ca-certificates/10689542.pem /etc/ssl/certs/10689542.pem"
	I0717 19:23:49.007343 1084713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10689542.pem
	I0717 19:23:49.012304 1084713 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 17 18:52 /usr/share/ca-certificates/10689542.pem
	I0717 19:23:49.012351 1084713 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:52 /usr/share/ca-certificates/10689542.pem
	I0717 19:23:49.012410 1084713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10689542.pem
	I0717 19:23:49.017953 1084713 command_runner.go:130] > 3ec20f2e
	I0717 19:23:49.018260 1084713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10689542.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:23:49.026976 1084713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:23:49.038491 1084713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:23:49.044166 1084713 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 17 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:23:49.044202 1084713 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:23:49.044270 1084713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:23:49.050486 1084713 command_runner.go:130] > b5213941
	I0717 19:23:49.050583 1084713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:23:49.060202 1084713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1068954.pem && ln -fs /usr/share/ca-certificates/1068954.pem /etc/ssl/certs/1068954.pem"
	I0717 19:23:49.071564 1084713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1068954.pem
	I0717 19:23:49.076893 1084713 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 17 18:52 /usr/share/ca-certificates/1068954.pem
	I0717 19:23:49.076935 1084713 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:52 /usr/share/ca-certificates/1068954.pem
	I0717 19:23:49.076981 1084713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1068954.pem
	I0717 19:23:49.083112 1084713 command_runner.go:130] > 51391683
	I0717 19:23:49.083295 1084713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1068954.pem /etc/ssl/certs/51391683.0"
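The three command cycles above show how each CA bundle is exposed to OpenSSL-based clients on the node: the PEM is placed under /usr/share/ca-certificates, its OpenSSL subject hash is computed with `openssl x509 -hash -noout`, and a symlink named <hash>.0 is created in /etc/ssl/certs pointing at the PEM. The following is a minimal, hypothetical Go sketch of that step (it is not minikube's implementation; the real run drives the same commands over SSH via ssh_runner):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCACert mirrors the commands seen in the log above: it asks openssl
    // for the certificate's subject hash and links /etc/ssl/certs/<hash>.0 at
    // the PEM so OpenSSL-based clients can discover the CA. Illustrative sketch
    // only, not minikube code.
    func installCACert(pemPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", pemPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	// Replace any stale link, then point <hash>.0 at the certificate.
    	_ = os.Remove(link)
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }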
	I0717 19:23:49.092990 1084713 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 19:23:49.097861 1084713 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 19:23:49.097912 1084713 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
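The non-zero `ls` above is not a failure: an absent /var/lib/minikube/certs/etcd directory is taken to mean this is the node's first start, so the etcd certificates still need to be generated. Below is a simplified, hypothetical Go sketch of that decision, using a local stat in place of the SSH-based `ls` shown in the log:

    package main

    import (
    	"errors"
    	"fmt"
    	"io/fs"
    	"os"
    )

    // etcdCertsPresent reports whether the etcd certificates directory already
    // exists. In the log above the same question is answered by running
    // `ls /var/lib/minikube/certs/etcd` over SSH and treating a non-zero exit
    // as "likely first start". Simplified local sketch, not minikube code.
    func etcdCertsPresent(dir string) (bool, error) {
    	_, err := os.Stat(dir)
    	if errors.Is(err, fs.ErrNotExist) {
    		return false, nil // directory absent: first start, certs must be generated
    	}
    	if err != nil {
    		return false, err
    	}
    	return true, nil
    }

    func main() {
    	present, err := etcdCertsPresent("/var/lib/minikube/certs/etcd")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("etcd certs already present:", present)
    }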
	I0717 19:23:49.098037 1084713 ssh_runner.go:195] Run: crio config
	I0717 19:23:49.164999 1084713 command_runner.go:130] ! time="2023-07-17 19:23:49.156901882Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0717 19:23:49.165060 1084713 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0717 19:23:49.180009 1084713 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0717 19:23:49.180049 1084713 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0717 19:23:49.180061 1084713 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0717 19:23:49.180067 1084713 command_runner.go:130] > #
	I0717 19:23:49.180086 1084713 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0717 19:23:49.180098 1084713 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0717 19:23:49.180110 1084713 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0717 19:23:49.180122 1084713 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0717 19:23:49.180133 1084713 command_runner.go:130] > # reload'.
	I0717 19:23:49.180153 1084713 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0717 19:23:49.180169 1084713 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0717 19:23:49.180184 1084713 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0717 19:23:49.180214 1084713 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0717 19:23:49.180225 1084713 command_runner.go:130] > [crio]
	I0717 19:23:49.180238 1084713 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0717 19:23:49.180248 1084713 command_runner.go:130] > # containers images, in this directory.
	I0717 19:23:49.180259 1084713 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0717 19:23:49.180275 1084713 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0717 19:23:49.180284 1084713 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0717 19:23:49.180294 1084713 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0717 19:23:49.180309 1084713 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0717 19:23:49.180322 1084713 command_runner.go:130] > storage_driver = "overlay"
	I0717 19:23:49.180333 1084713 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0717 19:23:49.180348 1084713 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0717 19:23:49.180360 1084713 command_runner.go:130] > storage_option = [
	I0717 19:23:49.180372 1084713 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0717 19:23:49.180382 1084713 command_runner.go:130] > ]
	I0717 19:23:49.180394 1084713 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0717 19:23:49.180406 1084713 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0717 19:23:49.180419 1084713 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0717 19:23:49.180433 1084713 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0717 19:23:49.180448 1084713 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0717 19:23:49.180460 1084713 command_runner.go:130] > # always happen on a node reboot
	I0717 19:23:49.180471 1084713 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0717 19:23:49.180487 1084713 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0717 19:23:49.180501 1084713 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0717 19:23:49.180520 1084713 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0717 19:23:49.180535 1084713 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0717 19:23:49.180552 1084713 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0717 19:23:49.180569 1084713 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0717 19:23:49.180581 1084713 command_runner.go:130] > # internal_wipe = true
	I0717 19:23:49.180592 1084713 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0717 19:23:49.180607 1084713 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0717 19:23:49.180621 1084713 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0717 19:23:49.180631 1084713 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0717 19:23:49.180646 1084713 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0717 19:23:49.180657 1084713 command_runner.go:130] > [crio.api]
	I0717 19:23:49.180671 1084713 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0717 19:23:49.180683 1084713 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0717 19:23:49.180694 1084713 command_runner.go:130] > # IP address on which the stream server will listen.
	I0717 19:23:49.180702 1084713 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0717 19:23:49.180718 1084713 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0717 19:23:49.180733 1084713 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0717 19:23:49.180744 1084713 command_runner.go:130] > # stream_port = "0"
	I0717 19:23:49.180756 1084713 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0717 19:23:49.180768 1084713 command_runner.go:130] > # stream_enable_tls = false
	I0717 19:23:49.180781 1084713 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0717 19:23:49.180791 1084713 command_runner.go:130] > # stream_idle_timeout = ""
	I0717 19:23:49.180802 1084713 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0717 19:23:49.180817 1084713 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0717 19:23:49.180824 1084713 command_runner.go:130] > # minutes.
	I0717 19:23:49.180836 1084713 command_runner.go:130] > # stream_tls_cert = ""
	I0717 19:23:49.180852 1084713 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0717 19:23:49.180867 1084713 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0717 19:23:49.180878 1084713 command_runner.go:130] > # stream_tls_key = ""
	I0717 19:23:49.180890 1084713 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0717 19:23:49.180900 1084713 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0717 19:23:49.180914 1084713 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0717 19:23:49.180926 1084713 command_runner.go:130] > # stream_tls_ca = ""
	I0717 19:23:49.180943 1084713 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0717 19:23:49.180959 1084713 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0717 19:23:49.180976 1084713 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0717 19:23:49.180985 1084713 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0717 19:23:49.181018 1084713 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0717 19:23:49.181029 1084713 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0717 19:23:49.181037 1084713 command_runner.go:130] > [crio.runtime]
	I0717 19:23:49.181048 1084713 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0717 19:23:49.181059 1084713 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0717 19:23:49.181065 1084713 command_runner.go:130] > # "nofile=1024:2048"
	I0717 19:23:49.181073 1084713 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0717 19:23:49.181081 1084713 command_runner.go:130] > # default_ulimits = [
	I0717 19:23:49.181087 1084713 command_runner.go:130] > # ]
	I0717 19:23:49.181099 1084713 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0717 19:23:49.181106 1084713 command_runner.go:130] > # no_pivot = false
	I0717 19:23:49.181117 1084713 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0717 19:23:49.181132 1084713 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0717 19:23:49.181145 1084713 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0717 19:23:49.181154 1084713 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0717 19:23:49.181168 1084713 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0717 19:23:49.181185 1084713 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 19:23:49.181194 1084713 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0717 19:23:49.181206 1084713 command_runner.go:130] > # Cgroup setting for conmon
	I0717 19:23:49.181220 1084713 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0717 19:23:49.181231 1084713 command_runner.go:130] > conmon_cgroup = "pod"
	I0717 19:23:49.181246 1084713 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0717 19:23:49.181257 1084713 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0717 19:23:49.181268 1084713 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 19:23:49.181279 1084713 command_runner.go:130] > conmon_env = [
	I0717 19:23:49.181290 1084713 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0717 19:23:49.181300 1084713 command_runner.go:130] > ]
	I0717 19:23:49.181312 1084713 command_runner.go:130] > # Additional environment variables to set for all the
	I0717 19:23:49.181325 1084713 command_runner.go:130] > # containers. These are overridden if set in the
	I0717 19:23:49.181339 1084713 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0717 19:23:49.181350 1084713 command_runner.go:130] > # default_env = [
	I0717 19:23:49.181355 1084713 command_runner.go:130] > # ]
	I0717 19:23:49.181363 1084713 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0717 19:23:49.181374 1084713 command_runner.go:130] > # selinux = false
	I0717 19:23:49.181386 1084713 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0717 19:23:49.181402 1084713 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0717 19:23:49.181416 1084713 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0717 19:23:49.181427 1084713 command_runner.go:130] > # seccomp_profile = ""
	I0717 19:23:49.181442 1084713 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0717 19:23:49.181457 1084713 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0717 19:23:49.181475 1084713 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0717 19:23:49.181499 1084713 command_runner.go:130] > # which might increase security.
	I0717 19:23:49.181512 1084713 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0717 19:23:49.181528 1084713 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0717 19:23:49.181541 1084713 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0717 19:23:49.181578 1084713 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0717 19:23:49.181595 1084713 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0717 19:23:49.181605 1084713 command_runner.go:130] > # This option supports live configuration reload.
	I0717 19:23:49.181618 1084713 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0717 19:23:49.181633 1084713 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0717 19:23:49.181645 1084713 command_runner.go:130] > # the cgroup blockio controller.
	I0717 19:23:49.181658 1084713 command_runner.go:130] > # blockio_config_file = ""
	I0717 19:23:49.181670 1084713 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0717 19:23:49.181681 1084713 command_runner.go:130] > # irqbalance daemon.
	I0717 19:23:49.181695 1084713 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0717 19:23:49.181707 1084713 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0717 19:23:49.181721 1084713 command_runner.go:130] > # This option supports live configuration reload.
	I0717 19:23:49.181732 1084713 command_runner.go:130] > # rdt_config_file = ""
	I0717 19:23:49.181746 1084713 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0717 19:23:49.181757 1084713 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0717 19:23:49.181767 1084713 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0717 19:23:49.181778 1084713 command_runner.go:130] > # separate_pull_cgroup = ""
	I0717 19:23:49.181793 1084713 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0717 19:23:49.181807 1084713 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0717 19:23:49.181819 1084713 command_runner.go:130] > # will be added.
	I0717 19:23:49.181830 1084713 command_runner.go:130] > # default_capabilities = [
	I0717 19:23:49.181837 1084713 command_runner.go:130] > # 	"CHOWN",
	I0717 19:23:49.181848 1084713 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0717 19:23:49.181855 1084713 command_runner.go:130] > # 	"FSETID",
	I0717 19:23:49.181865 1084713 command_runner.go:130] > # 	"FOWNER",
	I0717 19:23:49.181872 1084713 command_runner.go:130] > # 	"SETGID",
	I0717 19:23:49.181883 1084713 command_runner.go:130] > # 	"SETUID",
	I0717 19:23:49.181891 1084713 command_runner.go:130] > # 	"SETPCAP",
	I0717 19:23:49.181903 1084713 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0717 19:23:49.181914 1084713 command_runner.go:130] > # 	"KILL",
	I0717 19:23:49.181924 1084713 command_runner.go:130] > # ]
	I0717 19:23:49.181935 1084713 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0717 19:23:49.181950 1084713 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 19:23:49.181957 1084713 command_runner.go:130] > # default_sysctls = [
	I0717 19:23:49.181968 1084713 command_runner.go:130] > # ]
	I0717 19:23:49.181980 1084713 command_runner.go:130] > # List of devices on the host that a
	I0717 19:23:49.181995 1084713 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0717 19:23:49.182007 1084713 command_runner.go:130] > # allowed_devices = [
	I0717 19:23:49.182017 1084713 command_runner.go:130] > # 	"/dev/fuse",
	I0717 19:23:49.182024 1084713 command_runner.go:130] > # ]
	I0717 19:23:49.182035 1084713 command_runner.go:130] > # List of additional devices, specified as
	I0717 19:23:49.182052 1084713 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0717 19:23:49.182075 1084713 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0717 19:23:49.182107 1084713 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 19:23:49.182115 1084713 command_runner.go:130] > # additional_devices = [
	I0717 19:23:49.182119 1084713 command_runner.go:130] > # ]
	I0717 19:23:49.182124 1084713 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0717 19:23:49.182129 1084713 command_runner.go:130] > # cdi_spec_dirs = [
	I0717 19:23:49.182134 1084713 command_runner.go:130] > # 	"/etc/cdi",
	I0717 19:23:49.182139 1084713 command_runner.go:130] > # 	"/var/run/cdi",
	I0717 19:23:49.182142 1084713 command_runner.go:130] > # ]
	I0717 19:23:49.182150 1084713 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0717 19:23:49.182158 1084713 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0717 19:23:49.182164 1084713 command_runner.go:130] > # Defaults to false.
	I0717 19:23:49.182169 1084713 command_runner.go:130] > # device_ownership_from_security_context = false
	I0717 19:23:49.182178 1084713 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0717 19:23:49.182184 1084713 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0717 19:23:49.182189 1084713 command_runner.go:130] > # hooks_dir = [
	I0717 19:23:49.182193 1084713 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0717 19:23:49.182200 1084713 command_runner.go:130] > # ]
	I0717 19:23:49.182206 1084713 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0717 19:23:49.182215 1084713 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0717 19:23:49.182220 1084713 command_runner.go:130] > # its default mounts from the following two files:
	I0717 19:23:49.182226 1084713 command_runner.go:130] > #
	I0717 19:23:49.182232 1084713 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0717 19:23:49.182242 1084713 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0717 19:23:49.182247 1084713 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0717 19:23:49.182253 1084713 command_runner.go:130] > #
	I0717 19:23:49.182260 1084713 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0717 19:23:49.182268 1084713 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0717 19:23:49.182275 1084713 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0717 19:23:49.182282 1084713 command_runner.go:130] > #      only add mounts it finds in this file.
	I0717 19:23:49.182286 1084713 command_runner.go:130] > #
	I0717 19:23:49.182290 1084713 command_runner.go:130] > # default_mounts_file = ""
	I0717 19:23:49.182301 1084713 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0717 19:23:49.182308 1084713 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0717 19:23:49.182316 1084713 command_runner.go:130] > pids_limit = 1024
	I0717 19:23:49.182323 1084713 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0717 19:23:49.182333 1084713 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0717 19:23:49.182342 1084713 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0717 19:23:49.182350 1084713 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0717 19:23:49.182357 1084713 command_runner.go:130] > # log_size_max = -1
	I0717 19:23:49.182367 1084713 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0717 19:23:49.182374 1084713 command_runner.go:130] > # log_to_journald = false
	I0717 19:23:49.182381 1084713 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0717 19:23:49.182389 1084713 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0717 19:23:49.182395 1084713 command_runner.go:130] > # Path to directory for container attach sockets.
	I0717 19:23:49.182403 1084713 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0717 19:23:49.182436 1084713 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0717 19:23:49.182444 1084713 command_runner.go:130] > # bind_mount_prefix = ""
	I0717 19:23:49.182450 1084713 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0717 19:23:49.182454 1084713 command_runner.go:130] > # read_only = false
	I0717 19:23:49.182462 1084713 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0717 19:23:49.182469 1084713 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0717 19:23:49.182482 1084713 command_runner.go:130] > # live configuration reload.
	I0717 19:23:49.182489 1084713 command_runner.go:130] > # log_level = "info"
	I0717 19:23:49.182496 1084713 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0717 19:23:49.182504 1084713 command_runner.go:130] > # This option supports live configuration reload.
	I0717 19:23:49.182508 1084713 command_runner.go:130] > # log_filter = ""
	I0717 19:23:49.182517 1084713 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0717 19:23:49.182524 1084713 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0717 19:23:49.182531 1084713 command_runner.go:130] > # separated by comma.
	I0717 19:23:49.182535 1084713 command_runner.go:130] > # uid_mappings = ""
	I0717 19:23:49.182541 1084713 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0717 19:23:49.182550 1084713 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0717 19:23:49.182555 1084713 command_runner.go:130] > # separated by comma.
	I0717 19:23:49.182564 1084713 command_runner.go:130] > # gid_mappings = ""
	I0717 19:23:49.182570 1084713 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0717 19:23:49.182579 1084713 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 19:23:49.182586 1084713 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 19:23:49.182593 1084713 command_runner.go:130] > # minimum_mappable_uid = -1
	I0717 19:23:49.182599 1084713 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0717 19:23:49.182608 1084713 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 19:23:49.182617 1084713 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 19:23:49.182623 1084713 command_runner.go:130] > # minimum_mappable_gid = -1
	I0717 19:23:49.182629 1084713 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0717 19:23:49.182636 1084713 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0717 19:23:49.182643 1084713 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0717 19:23:49.182650 1084713 command_runner.go:130] > # ctr_stop_timeout = 30
	I0717 19:23:49.182656 1084713 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0717 19:23:49.182664 1084713 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0717 19:23:49.182670 1084713 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0717 19:23:49.182676 1084713 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0717 19:23:49.182683 1084713 command_runner.go:130] > drop_infra_ctr = false
	I0717 19:23:49.182692 1084713 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0717 19:23:49.182698 1084713 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0717 19:23:49.182707 1084713 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0717 19:23:49.182714 1084713 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0717 19:23:49.182720 1084713 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0717 19:23:49.182728 1084713 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0717 19:23:49.182733 1084713 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0717 19:23:49.182743 1084713 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0717 19:23:49.182748 1084713 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0717 19:23:49.182754 1084713 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0717 19:23:49.182764 1084713 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0717 19:23:49.182773 1084713 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0717 19:23:49.182780 1084713 command_runner.go:130] > # default_runtime = "runc"
	I0717 19:23:49.182786 1084713 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0717 19:23:49.182795 1084713 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0717 19:23:49.182807 1084713 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0717 19:23:49.182815 1084713 command_runner.go:130] > # creation as a file is not desired either.
	I0717 19:23:49.182824 1084713 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0717 19:23:49.182832 1084713 command_runner.go:130] > # the hostname is being managed dynamically.
	I0717 19:23:49.182837 1084713 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0717 19:23:49.182840 1084713 command_runner.go:130] > # ]
	I0717 19:23:49.182847 1084713 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0717 19:23:49.182856 1084713 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0717 19:23:49.182863 1084713 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0717 19:23:49.182872 1084713 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0717 19:23:49.182878 1084713 command_runner.go:130] > #
	I0717 19:23:49.182884 1084713 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0717 19:23:49.182892 1084713 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0717 19:23:49.182897 1084713 command_runner.go:130] > #  runtime_type = "oci"
	I0717 19:23:49.182904 1084713 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0717 19:23:49.182910 1084713 command_runner.go:130] > #  privileged_without_host_devices = false
	I0717 19:23:49.182917 1084713 command_runner.go:130] > #  allowed_annotations = []
	I0717 19:23:49.182921 1084713 command_runner.go:130] > # Where:
	I0717 19:23:49.182929 1084713 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0717 19:23:49.182936 1084713 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0717 19:23:49.182945 1084713 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0717 19:23:49.182952 1084713 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0717 19:23:49.182958 1084713 command_runner.go:130] > #   in $PATH.
	I0717 19:23:49.182967 1084713 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0717 19:23:49.182975 1084713 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0717 19:23:49.182981 1084713 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0717 19:23:49.182984 1084713 command_runner.go:130] > #   state.
	I0717 19:23:49.182990 1084713 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0717 19:23:49.182996 1084713 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0717 19:23:49.183006 1084713 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0717 19:23:49.183011 1084713 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0717 19:23:49.183018 1084713 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0717 19:23:49.183024 1084713 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0717 19:23:49.183029 1084713 command_runner.go:130] > #   The currently recognized values are:
	I0717 19:23:49.183035 1084713 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0717 19:23:49.183042 1084713 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0717 19:23:49.183053 1084713 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0717 19:23:49.183060 1084713 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0717 19:23:49.183068 1084713 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0717 19:23:49.183075 1084713 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0717 19:23:49.183081 1084713 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0717 19:23:49.183087 1084713 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0717 19:23:49.183095 1084713 command_runner.go:130] > #   should be moved to the container's cgroup
	I0717 19:23:49.183099 1084713 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0717 19:23:49.183106 1084713 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0717 19:23:49.183110 1084713 command_runner.go:130] > runtime_type = "oci"
	I0717 19:23:49.183117 1084713 command_runner.go:130] > runtime_root = "/run/runc"
	I0717 19:23:49.183123 1084713 command_runner.go:130] > runtime_config_path = ""
	I0717 19:23:49.183130 1084713 command_runner.go:130] > monitor_path = ""
	I0717 19:23:49.183135 1084713 command_runner.go:130] > monitor_cgroup = ""
	I0717 19:23:49.183142 1084713 command_runner.go:130] > monitor_exec_cgroup = ""
	I0717 19:23:49.183149 1084713 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0717 19:23:49.183156 1084713 command_runner.go:130] > # running containers
	I0717 19:23:49.183161 1084713 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0717 19:23:49.183170 1084713 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0717 19:23:49.183199 1084713 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0717 19:23:49.183208 1084713 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0717 19:23:49.183213 1084713 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0717 19:23:49.183218 1084713 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0717 19:23:49.183226 1084713 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0717 19:23:49.183231 1084713 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0717 19:23:49.183238 1084713 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0717 19:23:49.183243 1084713 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0717 19:23:49.183252 1084713 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0717 19:23:49.183261 1084713 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0717 19:23:49.183270 1084713 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0717 19:23:49.183277 1084713 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0717 19:23:49.183288 1084713 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0717 19:23:49.183294 1084713 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0717 19:23:49.183306 1084713 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0717 19:23:49.183314 1084713 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0717 19:23:49.183322 1084713 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0717 19:23:49.183329 1084713 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0717 19:23:49.183336 1084713 command_runner.go:130] > # Example:
	I0717 19:23:49.183341 1084713 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0717 19:23:49.183349 1084713 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0717 19:23:49.183354 1084713 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0717 19:23:49.183361 1084713 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0717 19:23:49.183365 1084713 command_runner.go:130] > # cpuset = 0
	I0717 19:23:49.183373 1084713 command_runner.go:130] > # cpushares = "0-1"
	I0717 19:23:49.183376 1084713 command_runner.go:130] > # Where:
	I0717 19:23:49.183381 1084713 command_runner.go:130] > # The workload name is workload-type.
	I0717 19:23:49.183390 1084713 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0717 19:23:49.183398 1084713 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0717 19:23:49.183408 1084713 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0717 19:23:49.183418 1084713 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0717 19:23:49.183427 1084713 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0717 19:23:49.183432 1084713 command_runner.go:130] > # 
	I0717 19:23:49.183441 1084713 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0717 19:23:49.183448 1084713 command_runner.go:130] > #
	I0717 19:23:49.183454 1084713 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0717 19:23:49.183463 1084713 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0717 19:23:49.183470 1084713 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0717 19:23:49.183482 1084713 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0717 19:23:49.183491 1084713 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0717 19:23:49.183495 1084713 command_runner.go:130] > [crio.image]
	I0717 19:23:49.183501 1084713 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0717 19:23:49.183508 1084713 command_runner.go:130] > # default_transport = "docker://"
	I0717 19:23:49.183515 1084713 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0717 19:23:49.183521 1084713 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0717 19:23:49.183525 1084713 command_runner.go:130] > # global_auth_file = ""
	I0717 19:23:49.183531 1084713 command_runner.go:130] > # The image used to instantiate infra containers.
	I0717 19:23:49.183542 1084713 command_runner.go:130] > # This option supports live configuration reload.
	I0717 19:23:49.183549 1084713 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0717 19:23:49.183556 1084713 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0717 19:23:49.183565 1084713 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0717 19:23:49.183570 1084713 command_runner.go:130] > # This option supports live configuration reload.
	I0717 19:23:49.183577 1084713 command_runner.go:130] > # pause_image_auth_file = ""
	I0717 19:23:49.183584 1084713 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0717 19:23:49.183593 1084713 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0717 19:23:49.183599 1084713 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0717 19:23:49.183608 1084713 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0717 19:23:49.183612 1084713 command_runner.go:130] > # pause_command = "/pause"
	I0717 19:23:49.183621 1084713 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0717 19:23:49.183628 1084713 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0717 19:23:49.183637 1084713 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0717 19:23:49.183644 1084713 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0717 19:23:49.183652 1084713 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0717 19:23:49.183657 1084713 command_runner.go:130] > # signature_policy = ""
	I0717 19:23:49.183668 1084713 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0717 19:23:49.183677 1084713 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0717 19:23:49.183682 1084713 command_runner.go:130] > # changing them here.
	I0717 19:23:49.183686 1084713 command_runner.go:130] > # insecure_registries = [
	I0717 19:23:49.183690 1084713 command_runner.go:130] > # ]
	I0717 19:23:49.183699 1084713 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0717 19:23:49.183707 1084713 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0717 19:23:49.183712 1084713 command_runner.go:130] > # image_volumes = "mkdir"
	I0717 19:23:49.183719 1084713 command_runner.go:130] > # Temporary directory to use for storing big files
	I0717 19:23:49.183724 1084713 command_runner.go:130] > # big_files_temporary_dir = ""
	I0717 19:23:49.183733 1084713 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0717 19:23:49.183737 1084713 command_runner.go:130] > # CNI plugins.
	I0717 19:23:49.183741 1084713 command_runner.go:130] > [crio.network]
	I0717 19:23:49.183749 1084713 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0717 19:23:49.183755 1084713 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0717 19:23:49.183762 1084713 command_runner.go:130] > # cni_default_network = ""
	I0717 19:23:49.183768 1084713 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0717 19:23:49.183775 1084713 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0717 19:23:49.183782 1084713 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0717 19:23:49.183789 1084713 command_runner.go:130] > # plugin_dirs = [
	I0717 19:23:49.183793 1084713 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0717 19:23:49.183798 1084713 command_runner.go:130] > # ]
	I0717 19:23:49.183806 1084713 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0717 19:23:49.183811 1084713 command_runner.go:130] > [crio.metrics]
	I0717 19:23:49.183819 1084713 command_runner.go:130] > # Globally enable or disable metrics support.
	I0717 19:23:49.183823 1084713 command_runner.go:130] > enable_metrics = true
	I0717 19:23:49.183831 1084713 command_runner.go:130] > # Specify enabled metrics collectors.
	I0717 19:23:49.183836 1084713 command_runner.go:130] > # Per default all metrics are enabled.
	I0717 19:23:49.183845 1084713 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0717 19:23:49.183851 1084713 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0717 19:23:49.183860 1084713 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0717 19:23:49.183865 1084713 command_runner.go:130] > # metrics_collectors = [
	I0717 19:23:49.183871 1084713 command_runner.go:130] > # 	"operations",
	I0717 19:23:49.183876 1084713 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0717 19:23:49.183883 1084713 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0717 19:23:49.183889 1084713 command_runner.go:130] > # 	"operations_errors",
	I0717 19:23:49.183902 1084713 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0717 19:23:49.183912 1084713 command_runner.go:130] > # 	"image_pulls_by_name",
	I0717 19:23:49.183919 1084713 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0717 19:23:49.183930 1084713 command_runner.go:130] > # 	"image_pulls_failures",
	I0717 19:23:49.183940 1084713 command_runner.go:130] > # 	"image_pulls_successes",
	I0717 19:23:49.183948 1084713 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0717 19:23:49.183959 1084713 command_runner.go:130] > # 	"image_layer_reuse",
	I0717 19:23:49.183966 1084713 command_runner.go:130] > # 	"containers_oom_total",
	I0717 19:23:49.183970 1084713 command_runner.go:130] > # 	"containers_oom",
	I0717 19:23:49.183974 1084713 command_runner.go:130] > # 	"processes_defunct",
	I0717 19:23:49.183978 1084713 command_runner.go:130] > # 	"operations_total",
	I0717 19:23:49.183986 1084713 command_runner.go:130] > # 	"operations_latency_seconds",
	I0717 19:23:49.183991 1084713 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0717 19:23:49.183998 1084713 command_runner.go:130] > # 	"operations_errors_total",
	I0717 19:23:49.184002 1084713 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0717 19:23:49.184009 1084713 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0717 19:23:49.184014 1084713 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0717 19:23:49.184021 1084713 command_runner.go:130] > # 	"image_pulls_success_total",
	I0717 19:23:49.184026 1084713 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0717 19:23:49.184033 1084713 command_runner.go:130] > # 	"containers_oom_count_total",
	I0717 19:23:49.184037 1084713 command_runner.go:130] > # ]
	I0717 19:23:49.184045 1084713 command_runner.go:130] > # The port on which the metrics server will listen.
	I0717 19:23:49.184051 1084713 command_runner.go:130] > # metrics_port = 9090
	I0717 19:23:49.184057 1084713 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0717 19:23:49.184064 1084713 command_runner.go:130] > # metrics_socket = ""
	I0717 19:23:49.184069 1084713 command_runner.go:130] > # The certificate for the secure metrics server.
	I0717 19:23:49.184077 1084713 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0717 19:23:49.184086 1084713 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0717 19:23:49.184094 1084713 command_runner.go:130] > # certificate on any modification event.
	I0717 19:23:49.184100 1084713 command_runner.go:130] > # metrics_cert = ""
	I0717 19:23:49.184106 1084713 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0717 19:23:49.184114 1084713 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0717 19:23:49.184119 1084713 command_runner.go:130] > # metrics_key = ""
	I0717 19:23:49.184124 1084713 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0717 19:23:49.184131 1084713 command_runner.go:130] > [crio.tracing]
	I0717 19:23:49.184137 1084713 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0717 19:23:49.184146 1084713 command_runner.go:130] > # enable_tracing = false
	I0717 19:23:49.184155 1084713 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0717 19:23:49.184163 1084713 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0717 19:23:49.184168 1084713 command_runner.go:130] > # Number of samples to collect per million spans.
	I0717 19:23:49.184176 1084713 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0717 19:23:49.184182 1084713 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0717 19:23:49.184189 1084713 command_runner.go:130] > [crio.stats]
	I0717 19:23:49.184194 1084713 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0717 19:23:49.184202 1084713 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0717 19:23:49.184207 1084713 command_runner.go:130] > # stats_collection_period = 0
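
Editor's note: the commented-out [crio.metrics], [crio.tracing] and [crio.stats] defaults dumped above show that CRI-O would expose Prometheus metrics on port 9090 if enable_metrics were turned on. A minimal Go sketch for reading such an endpoint, assuming metrics were enabled on the default port (this is an illustration, not part of minikube or this test):

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Assumes CRI-O was started with enable_metrics = true and the
	// default metrics_port = 9090 shown in the config dump above.
	resp, err := http.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", body) // Prometheus text format: image_pulls, operations_latency, etc.
}
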
	I0717 19:23:49.184289 1084713 cni.go:84] Creating CNI manager for ""
	I0717 19:23:49.184302 1084713 cni.go:137] 3 nodes found, recommending kindnet
	I0717 19:23:49.184312 1084713 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 19:23:49.184331 1084713 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.247 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-464644 NodeName:multinode-464644-m03 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.247 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:23:49.184463 1084713 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.247
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-464644-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.247
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:23:49.184573 1084713 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-464644-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.247
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:multinode-464644 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
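
Editor's note: the KubeletConfiguration block rendered above disables the kubelet's disk-pressure eviction entirely (every evictionHard threshold is "0%") and pins the cgroup driver, client CA and static pod path. A small Go sketch, using the sigs.k8s.io/yaml package and a hypothetical struct rather than the kubelet's own types, that parses that fragment (illustration only):

package main

import (
	"fmt"
	"log"

	"sigs.k8s.io/yaml"
)

// kubeletFragment models only the fields discussed above; it is not the
// real kubelet KubeletConfiguration type.
type kubeletFragment struct {
	CgroupDriver                string            `json:"cgroupDriver"`
	ImageGCHighThresholdPercent int32             `json:"imageGCHighThresholdPercent"`
	EvictionHard                map[string]string `json:"evictionHard"`
	FailSwapOn                  bool              `json:"failSwapOn"`
}

const fragment = `
cgroupDriver: cgroupfs
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
`

func main() {
	var cfg kubeletFragment
	if err := yaml.Unmarshal([]byte(fragment), &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", cfg)
}
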
	I0717 19:23:49.184643 1084713 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 19:23:49.195986 1084713 command_runner.go:130] > kubeadm
	I0717 19:23:49.196017 1084713 command_runner.go:130] > kubectl
	I0717 19:23:49.196024 1084713 command_runner.go:130] > kubelet
	I0717 19:23:49.196116 1084713 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:23:49.196203 1084713 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0717 19:23:49.206126 1084713 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0717 19:23:49.226053 1084713 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:23:49.246315 1084713 ssh_runner.go:195] Run: grep 192.168.39.174	control-plane.minikube.internal$ /etc/hosts
	I0717 19:23:49.250797 1084713 command_runner.go:130] > 192.168.39.174	control-plane.minikube.internal
	I0717 19:23:49.250923 1084713 host.go:66] Checking if "multinode-464644" exists ...
	I0717 19:23:49.251196 1084713 config.go:182] Loaded profile config "multinode-464644": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:23:49.251368 1084713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:23:49.251422 1084713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:23:49.267987 1084713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42857
	I0717 19:23:49.268599 1084713 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:23:49.269166 1084713 main.go:141] libmachine: Using API Version  1
	I0717 19:23:49.269191 1084713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:23:49.269656 1084713 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:23:49.269901 1084713 main.go:141] libmachine: (multinode-464644) Calling .DriverName
	I0717 19:23:49.270072 1084713 start.go:304] JoinCluster: &{Name:multinode-464644 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-464644 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.49 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.247 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio
-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}

	I0717 19:23:49.270201 1084713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0717 19:23:49.270220 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHHostname
	I0717 19:23:49.273301 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:23:49.273729 1084713 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:19:47 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:23:49.273764 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:23:49.273979 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHPort
	I0717 19:23:49.274192 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHKeyPath
	I0717 19:23:49.274379 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHUsername
	I0717 19:23:49.274493 1084713 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644/id_rsa Username:docker}
	I0717 19:23:49.446611 1084713 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 0qb7y4.5189g68e956hv9kj --discovery-token-ca-cert-hash sha256:a35378d49f60cb3201ada96dd7ea3b95e7cce0b7f53bd002f18d31a55869c8fc 
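
Editor's note: before re-joining m03, the flow asks the control plane for a fresh bootstrap token and join command (kubeadm token create --print-join-command --ttl=0, executed over SSH). A minimal local sketch of the same idea in Go, assuming kubeadm is on PATH and run with sufficient privileges; this is not minikube's ssh_runner:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same command the log shows being run on the control-plane node,
	// executed locally here for illustration.
	out, err := exec.Command("kubeadm", "token", "create",
		"--print-join-command", "--ttl=0").CombinedOutput()
	if err != nil {
		log.Fatalf("kubeadm token create: %v\n%s", err, out)
	}
	joinCmd := strings.TrimSpace(string(out))
	// e.g. "kubeadm join <endpoint>:8443 --token ... --discovery-token-ca-cert-hash sha256:..."
	fmt.Println(joinCmd)
}
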
	I0717 19:23:49.448770 1084713 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.39.247 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime: ControlPlane:false Worker:true}
	I0717 19:23:49.448824 1084713 host.go:66] Checking if "multinode-464644" exists ...
	I0717 19:23:49.449144 1084713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:23:49.449192 1084713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:23:49.464959 1084713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38207
	I0717 19:23:49.465449 1084713 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:23:49.466014 1084713 main.go:141] libmachine: Using API Version  1
	I0717 19:23:49.466046 1084713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:23:49.466429 1084713 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:23:49.466647 1084713 main.go:141] libmachine: (multinode-464644) Calling .DriverName
	I0717 19:23:49.466939 1084713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl drain multinode-464644-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0717 19:23:49.466970 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHHostname
	I0717 19:23:49.470059 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:23:49.470554 1084713 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:19:47 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:23:49.470587 1084713 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:23:49.470840 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHPort
	I0717 19:23:49.471060 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHKeyPath
	I0717 19:23:49.471263 1084713 main.go:141] libmachine: (multinode-464644) Calling .GetSSHUsername
	I0717 19:23:49.471437 1084713 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644/id_rsa Username:docker}
	I0717 19:23:49.680618 1084713 command_runner.go:130] > node/multinode-464644-m03 cordoned
	I0717 19:23:52.726107 1084713 command_runner.go:130] > pod "busybox-67b7f59bb-ftsv6" has DeletionTimestamp older than 1 seconds, skipping
	I0717 19:23:52.726145 1084713 command_runner.go:130] > node/multinode-464644-m03 drained
	I0717 19:23:52.727935 1084713 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0717 19:23:52.728005 1084713 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-znndf, kube-system/kube-proxy-56qvt
	I0717 19:23:52.728055 1084713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl drain multinode-464644-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.261080571s)
	I0717 19:23:52.728088 1084713 node.go:108] successfully drained node "m03"
	I0717 19:23:52.728473 1084713 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 19:23:52.728741 1084713 kapi.go:59] client config for multinode-464644: &rest.Config{Host:"https://192.168.39.174:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/client.key", CAFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 19:23:52.729063 1084713 request.go:1188] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0717 19:23:52.729147 1084713 round_trippers.go:463] DELETE https://192.168.39.174:8443/api/v1/nodes/multinode-464644-m03
	I0717 19:23:52.729157 1084713 round_trippers.go:469] Request Headers:
	I0717 19:23:52.729170 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:23:52.729180 1084713 round_trippers.go:473]     Content-Type: application/json
	I0717 19:23:52.729194 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:23:52.747255 1084713 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0717 19:23:52.747285 1084713 round_trippers.go:577] Response Headers:
	I0717 19:23:52.747293 1084713 round_trippers.go:580]     Content-Length: 171
	I0717 19:23:52.747299 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:23:52 GMT
	I0717 19:23:52.747304 1084713 round_trippers.go:580]     Audit-Id: e38b31e1-cac3-416e-9d23-0aeb0e64c56e
	I0717 19:23:52.747309 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:23:52.747315 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:23:52.747320 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:23:52.747326 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:23:52.747349 1084713 request.go:1188] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-464644-m03","kind":"nodes","uid":"78befe00-f3c3-4f9c-86ff-aea572ef1c48"}}
	I0717 19:23:52.747380 1084713 node.go:124] successfully deleted node "m03"
	I0717 19:23:52.747390 1084713 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.39.247 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime: ControlPlane:false Worker:true}
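
Editor's note: the raw DELETE against /api/v1/nodes/multinode-464644-m03 above removes the stale Node object after the drain, so the subsequent kubeadm join can re-register the worker. The same step expressed with client-go, as a sketch under the assumption that ~/.kube/config points at this cluster:

package main

import (
	"context"
	"log"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Delete the stale worker Node object so a fresh join can recreate it.
	if err := cs.CoreV1().Nodes().Delete(context.Background(),
		"multinode-464644-m03", metav1.DeleteOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("node deleted")
}
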
	I0717 19:23:52.747412 1084713 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.39.247 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime: ControlPlane:false Worker:true}
	I0717 19:23:52.747435 1084713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0qb7y4.5189g68e956hv9kj --discovery-token-ca-cert-hash sha256:a35378d49f60cb3201ada96dd7ea3b95e7cce0b7f53bd002f18d31a55869c8fc --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-464644-m03"
	I0717 19:23:52.812042 1084713 command_runner.go:130] > [preflight] Running pre-flight checks
	I0717 19:23:53.000905 1084713 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0717 19:23:53.000944 1084713 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0717 19:23:53.068459 1084713 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:23:53.068489 1084713 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:23:53.068730 1084713 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0717 19:23:53.231649 1084713 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0717 19:23:53.765683 1084713 command_runner.go:130] > This node has joined the cluster:
	I0717 19:23:53.765716 1084713 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0717 19:23:53.765723 1084713 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0717 19:23:53.765729 1084713 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0717 19:23:53.768333 1084713 command_runner.go:130] ! W0717 19:23:52.803714    2300 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0717 19:23:53.768356 1084713 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0717 19:23:53.768362 1084713 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0717 19:23:53.768370 1084713 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0717 19:23:53.768515 1084713 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0qb7y4.5189g68e956hv9kj --discovery-token-ca-cert-hash sha256:a35378d49f60cb3201ada96dd7ea3b95e7cce0b7f53bd002f18d31a55869c8fc --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-464644-m03": (1.02104949s)
	I0717 19:23:53.768550 1084713 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0717 19:23:54.068736 1084713 start.go:306] JoinCluster complete in 4.798658872s
	I0717 19:23:54.068771 1084713 cni.go:84] Creating CNI manager for ""
	I0717 19:23:54.068779 1084713 cni.go:137] 3 nodes found, recommending kindnet
	I0717 19:23:54.068847 1084713 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 19:23:54.075810 1084713 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0717 19:23:54.075836 1084713 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0717 19:23:54.075843 1084713 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0717 19:23:54.075849 1084713 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 19:23:54.075855 1084713 command_runner.go:130] > Access: 2023-07-17 19:19:47.710331536 +0000
	I0717 19:23:54.075860 1084713 command_runner.go:130] > Modify: 2023-07-15 02:34:28.000000000 +0000
	I0717 19:23:54.075871 1084713 command_runner.go:130] > Change: 2023-07-17 19:19:45.751331536 +0000
	I0717 19:23:54.075875 1084713 command_runner.go:130] >  Birth: -
	I0717 19:23:54.076402 1084713 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0717 19:23:54.076429 1084713 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 19:23:54.095548 1084713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 19:23:54.556865 1084713 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0717 19:23:54.556906 1084713 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0717 19:23:54.556915 1084713 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0717 19:23:54.556922 1084713 command_runner.go:130] > daemonset.apps/kindnet configured
	I0717 19:23:54.557400 1084713 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 19:23:54.557756 1084713 kapi.go:59] client config for multinode-464644: &rest.Config{Host:"https://192.168.39.174:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/client.key", CAFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 19:23:54.558218 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0717 19:23:54.558236 1084713 round_trippers.go:469] Request Headers:
	I0717 19:23:54.558247 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:23:54.558257 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:23:54.561028 1084713 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:23:54.561051 1084713 round_trippers.go:577] Response Headers:
	I0717 19:23:54.561068 1084713 round_trippers.go:580]     Audit-Id: 24d55e52-c626-494c-a6b7-8e0977a14225
	I0717 19:23:54.561076 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:23:54.561083 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:23:54.561092 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:23:54.561100 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:23:54.561111 1084713 round_trippers.go:580]     Content-Length: 291
	I0717 19:23:54.561125 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:23:54 GMT
	I0717 19:23:54.561162 1084713 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"06c3326f-def8-45bf-a91d-f07feefe253d","resourceVersion":"892","creationTimestamp":"2023-07-17T19:09:54Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0717 19:23:54.561285 1084713 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-464644" context rescaled to 1 replicas
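
Editor's note: the coredns deployment is rescaled to one replica through the autoscaling/v1 Scale subresource, as the GET/Scale exchange above shows. A client-go sketch of the same read-modify-write, under the same kubeconfig assumption as the earlier example:

package main

import (
	"context"
	"log"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.Background()
	deployments := cs.AppsV1().Deployments("kube-system")

	// Read the current Scale subresource, then write back the desired replica count.
	scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	scale.Spec.Replicas = 1
	if _, err := deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("coredns rescaled to 1 replica")
}
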
	I0717 19:23:54.561357 1084713 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.39.247 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime: ControlPlane:false Worker:true}
	I0717 19:23:54.563941 1084713 out.go:177] * Verifying Kubernetes components...
	I0717 19:23:54.565802 1084713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:23:54.581412 1084713 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 19:23:54.581682 1084713 kapi.go:59] client config for multinode-464644: &rest.Config{Host:"https://192.168.39.174:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/multinode-464644/client.key", CAFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 19:23:54.581989 1084713 node_ready.go:35] waiting up to 6m0s for node "multinode-464644-m03" to be "Ready" ...
	I0717 19:23:54.582076 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644-m03
	I0717 19:23:54.582085 1084713 round_trippers.go:469] Request Headers:
	I0717 19:23:54.582092 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:23:54.582099 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:23:54.584880 1084713 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:23:54.584907 1084713 round_trippers.go:577] Response Headers:
	I0717 19:23:54.584917 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:23:54.584926 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:23:54.584936 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:23:54.584943 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:23:54 GMT
	I0717 19:23:54.584950 1084713 round_trippers.go:580]     Audit-Id: c118f151-e150-4945-b57c-e250b3616c70
	I0717 19:23:54.584958 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:23:54.585116 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644-m03","uid":"5cbc2511-38dd-4577-bb1a-bdb52b8e9f28","resourceVersion":"1201","creationTimestamp":"2023-07-17T19:23:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:23:53Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I0717 19:23:54.585475 1084713 node_ready.go:49] node "multinode-464644-m03" has status "Ready":"True"
	I0717 19:23:54.585501 1084713 node_ready.go:38] duration metric: took 3.491005ms waiting for node "multinode-464644-m03" to be "Ready" ...
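
Editor's note: the node_ready wait above is a poll of the Node object's conditions until NodeReady reports True (it returned immediately here because the node was already Ready). A compact client-go sketch of the same check, an assumption for illustration rather than minikube's own verification code:

package main

import (
	"context"
	"log"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

// nodeReady reports whether the NodeReady condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	deadline := time.Now().Add(6 * time.Minute) // same budget as the log's wait
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.Background(),
			"multinode-464644-m03", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			log.Println("node is Ready")
			return
		}
		time.Sleep(3 * time.Second)
	}
	log.Fatal("timed out waiting for node to be Ready")
}
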
	I0717 19:23:54.585513 1084713 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:23:54.585610 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods
	I0717 19:23:54.585624 1084713 round_trippers.go:469] Request Headers:
	I0717 19:23:54.585635 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:23:54.585644 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:23:54.590597 1084713 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 19:23:54.590628 1084713 round_trippers.go:577] Response Headers:
	I0717 19:23:54.590637 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:23:54.590644 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:23:54 GMT
	I0717 19:23:54.590649 1084713 round_trippers.go:580]     Audit-Id: df31677a-0738-400d-98b6-68e2142751c6
	I0717 19:23:54.590654 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:23:54.590659 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:23:54.590664 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:23:54.592341 1084713 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1208"},"items":[{"metadata":{"name":"coredns-5d78c9869d-wqj4s","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991","resourceVersion":"873","creationTimestamp":"2023-07-17T19:10:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"8369cc45-03bf-4784-a3b1-d46615923fd9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8369cc45-03bf-4784-a3b1-d46615923fd9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82082 chars]
	I0717 19:23:54.594955 1084713 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-wqj4s" in "kube-system" namespace to be "Ready" ...
	I0717 19:23:54.595052 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-wqj4s
	I0717 19:23:54.595068 1084713 round_trippers.go:469] Request Headers:
	I0717 19:23:54.595079 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:23:54.595090 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:23:54.597924 1084713 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:23:54.597945 1084713 round_trippers.go:577] Response Headers:
	I0717 19:23:54.597956 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:23:54.597966 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:23:54 GMT
	I0717 19:23:54.597977 1084713 round_trippers.go:580]     Audit-Id: 763d01f2-69c0-4e2b-83a3-652fe1a475e9
	I0717 19:23:54.597986 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:23:54.597996 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:23:54.598008 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:23:54.598159 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-wqj4s","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991","resourceVersion":"873","creationTimestamp":"2023-07-17T19:10:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"8369cc45-03bf-4784-a3b1-d46615923fd9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8369cc45-03bf-4784-a3b1-d46615923fd9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0717 19:23:54.598628 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:23:54.598641 1084713 round_trippers.go:469] Request Headers:
	I0717 19:23:54.598648 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:23:54.598655 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:23:54.601488 1084713 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:23:54.601516 1084713 round_trippers.go:577] Response Headers:
	I0717 19:23:54.601523 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:23:54 GMT
	I0717 19:23:54.601529 1084713 round_trippers.go:580]     Audit-Id: 09e3aef7-515e-497e-a7d3-1bcc747aa3af
	I0717 19:23:54.601535 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:23:54.601540 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:23:54.601546 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:23:54.601551 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:23:54.601886 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"903","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0717 19:23:54.602214 1084713 pod_ready.go:92] pod "coredns-5d78c9869d-wqj4s" in "kube-system" namespace has status "Ready":"True"
	I0717 19:23:54.602230 1084713 pod_ready.go:81] duration metric: took 7.248009ms waiting for pod "coredns-5d78c9869d-wqj4s" in "kube-system" namespace to be "Ready" ...
	I0717 19:23:54.602260 1084713 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:23:54.602320 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-464644
	I0717 19:23:54.602329 1084713 round_trippers.go:469] Request Headers:
	I0717 19:23:54.602336 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:23:54.602342 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:23:54.605494 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:23:54.605518 1084713 round_trippers.go:577] Response Headers:
	I0717 19:23:54.605525 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:23:54 GMT
	I0717 19:23:54.605531 1084713 round_trippers.go:580]     Audit-Id: 9b2de8be-1ac3-4843-934d-0386da762126
	I0717 19:23:54.605536 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:23:54.605543 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:23:54.605548 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:23:54.605554 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:23:54.605849 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-464644","namespace":"kube-system","uid":"b672d395-d32d-4198-b486-d9cff48d8b9a","resourceVersion":"884","creationTimestamp":"2023-07-17T19:09:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.174:2379","kubernetes.io/config.hash":"d5b599b6912e0d4b30d78bf7b7e52672","kubernetes.io/config.mirror":"d5b599b6912e0d4b30d78bf7b7e52672","kubernetes.io/config.seen":"2023-07-17T19:09:54.339578401Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:09:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0717 19:23:54.606280 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:23:54.606291 1084713 round_trippers.go:469] Request Headers:
	I0717 19:23:54.606299 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:23:54.606305 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:23:54.608844 1084713 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:23:54.608860 1084713 round_trippers.go:577] Response Headers:
	I0717 19:23:54.608866 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:23:54.608872 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:23:54.608878 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:23:54.608886 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:23:54 GMT
	I0717 19:23:54.608895 1084713 round_trippers.go:580]     Audit-Id: 925cb2f5-9f6e-4090-bd80-43fc72b25129
	I0717 19:23:54.608908 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:23:54.609042 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"903","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0717 19:23:54.609352 1084713 pod_ready.go:92] pod "etcd-multinode-464644" in "kube-system" namespace has status "Ready":"True"
	I0717 19:23:54.609368 1084713 pod_ready.go:81] duration metric: took 7.101854ms waiting for pod "etcd-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:23:54.609385 1084713 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:23:54.609444 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-464644
	I0717 19:23:54.609454 1084713 round_trippers.go:469] Request Headers:
	I0717 19:23:54.609464 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:23:54.609472 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:23:54.612037 1084713 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:23:54.612066 1084713 round_trippers.go:577] Response Headers:
	I0717 19:23:54.612075 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:23:54.612081 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:23:54.612086 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:23:54.612092 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:23:54.612097 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:23:54 GMT
	I0717 19:23:54.612102 1084713 round_trippers.go:580]     Audit-Id: b512a2e5-0ab3-4040-97cc-dd285f9778d1
	I0717 19:23:54.612264 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-464644","namespace":"kube-system","uid":"dd6e14e2-0b92-42b9-b6a2-1562c2c70903","resourceVersion":"867","creationTimestamp":"2023-07-17T19:09:54Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.174:8443","kubernetes.io/config.hash":"b280034e13df00701aec7afc575fcc6c","kubernetes.io/config.mirror":"b280034e13df00701aec7afc575fcc6c","kubernetes.io/config.seen":"2023-07-17T19:09:54.339586957Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:09:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0717 19:23:54.612715 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:23:54.612729 1084713 round_trippers.go:469] Request Headers:
	I0717 19:23:54.612740 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:23:54.612749 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:23:54.616401 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:23:54.616426 1084713 round_trippers.go:577] Response Headers:
	I0717 19:23:54.616435 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:23:54.616444 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:23:54.616452 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:23:54.616460 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:23:54.616468 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:23:54 GMT
	I0717 19:23:54.616477 1084713 round_trippers.go:580]     Audit-Id: bdcea56c-925d-497c-93a6-5e3b430d6be2
	I0717 19:23:54.616726 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"903","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0717 19:23:54.617044 1084713 pod_ready.go:92] pod "kube-apiserver-multinode-464644" in "kube-system" namespace has status "Ready":"True"
	I0717 19:23:54.617060 1084713 pod_ready.go:81] duration metric: took 7.666064ms waiting for pod "kube-apiserver-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:23:54.617075 1084713 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:23:54.617139 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-464644
	I0717 19:23:54.617148 1084713 round_trippers.go:469] Request Headers:
	I0717 19:23:54.617159 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:23:54.617172 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:23:54.620891 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:23:54.620919 1084713 round_trippers.go:577] Response Headers:
	I0717 19:23:54.620929 1084713 round_trippers.go:580]     Audit-Id: a2fd1252-7ee3-4273-b5ed-c6bfde139183
	I0717 19:23:54.620938 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:23:54.620946 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:23:54.620954 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:23:54.620963 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:23:54.620981 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:23:54 GMT
	I0717 19:23:54.622191 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-464644","namespace":"kube-system","uid":"6b598e8b-6c96-4014-b0a2-de37f107a0e9","resourceVersion":"880","creationTimestamp":"2023-07-17T19:09:54Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"323b8f41b30f0969feab8ff61a3ecabd","kubernetes.io/config.mirror":"323b8f41b30f0969feab8ff61a3ecabd","kubernetes.io/config.seen":"2023-07-17T19:09:54.339588566Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:09:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0717 19:23:54.622654 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:23:54.622672 1084713 round_trippers.go:469] Request Headers:
	I0717 19:23:54.622684 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:23:54.622695 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:23:54.625262 1084713 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:23:54.625289 1084713 round_trippers.go:577] Response Headers:
	I0717 19:23:54.625297 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:23:54.625303 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:23:54.625309 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:23:54.625314 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:23:54.625319 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:23:54 GMT
	I0717 19:23:54.625328 1084713 round_trippers.go:580]     Audit-Id: 38b66f6c-5c9a-476f-b667-30e86af37f5e
	I0717 19:23:54.625482 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"903","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0717 19:23:54.625940 1084713 pod_ready.go:92] pod "kube-controller-manager-multinode-464644" in "kube-system" namespace has status "Ready":"True"
	I0717 19:23:54.625963 1084713 pod_ready.go:81] duration metric: took 8.875896ms waiting for pod "kube-controller-manager-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:23:54.625977 1084713 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-56qvt" in "kube-system" namespace to be "Ready" ...
	I0717 19:23:54.782388 1084713 request.go:628] Waited for 156.318227ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-56qvt
	I0717 19:23:54.782467 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-56qvt
	I0717 19:23:54.782474 1084713 round_trippers.go:469] Request Headers:
	I0717 19:23:54.782490 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:23:54.782502 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:23:54.785749 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:23:54.785786 1084713 round_trippers.go:577] Response Headers:
	I0717 19:23:54.785797 1084713 round_trippers.go:580]     Audit-Id: cc676de6-b8ce-4ec0-bccf-336bd47e974a
	I0717 19:23:54.785806 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:23:54.785815 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:23:54.785822 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:23:54.785830 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:23:54.785839 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:23:54 GMT
	I0717 19:23:54.786062 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-56qvt","generateName":"kube-proxy-","namespace":"kube-system","uid":"8207802f-ef88-4f7f-871c-bc528ef98b58","resourceVersion":"1205","creationTimestamp":"2023-07-17T19:11:40Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cab15633-0bd8-4e4c-a88b-c03af1462254","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:11:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cab15633-0bd8-4e4c-a88b-c03af1462254\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5887 chars]
	I0717 19:23:54.983104 1084713 request.go:628] Waited for 196.48465ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes/multinode-464644-m03
	I0717 19:23:54.983181 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644-m03
	I0717 19:23:54.983187 1084713 round_trippers.go:469] Request Headers:
	I0717 19:23:54.983195 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:23:54.983205 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:23:54.986503 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:23:54.986542 1084713 round_trippers.go:577] Response Headers:
	I0717 19:23:54.986550 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:23:54.986555 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:23:54.986563 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:23:54.986568 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:23:54 GMT
	I0717 19:23:54.986574 1084713 round_trippers.go:580]     Audit-Id: 896591d8-4b0f-471a-b9f2-813465f40f20
	I0717 19:23:54.986582 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:23:54.986809 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644-m03","uid":"5cbc2511-38dd-4577-bb1a-bdb52b8e9f28","resourceVersion":"1201","creationTimestamp":"2023-07-17T19:23:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:23:53Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I0717 19:23:55.488032 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-56qvt
	I0717 19:23:55.488068 1084713 round_trippers.go:469] Request Headers:
	I0717 19:23:55.488080 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:23:55.488090 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:23:55.490676 1084713 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:23:55.490764 1084713 round_trippers.go:577] Response Headers:
	I0717 19:23:55.490777 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:23:55.490786 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:23:55.490795 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:23:55.490803 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:23:55 GMT
	I0717 19:23:55.490811 1084713 round_trippers.go:580]     Audit-Id: 0944ba8e-9eb0-4ed8-83f9-4a2f432e272b
	I0717 19:23:55.490821 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:23:55.491019 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-56qvt","generateName":"kube-proxy-","namespace":"kube-system","uid":"8207802f-ef88-4f7f-871c-bc528ef98b58","resourceVersion":"1218","creationTimestamp":"2023-07-17T19:11:40Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cab15633-0bd8-4e4c-a88b-c03af1462254","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:11:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cab15633-0bd8-4e4c-a88b-c03af1462254\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I0717 19:23:55.491658 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644-m03
	I0717 19:23:55.491685 1084713 round_trippers.go:469] Request Headers:
	I0717 19:23:55.491697 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:23:55.491706 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:23:55.494253 1084713 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:23:55.494274 1084713 round_trippers.go:577] Response Headers:
	I0717 19:23:55.494280 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:23:55.494286 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:23:55.494291 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:23:55.494296 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:23:55 GMT
	I0717 19:23:55.494302 1084713 round_trippers.go:580]     Audit-Id: 672ee0c2-f735-4fc4-b5bd-771c3d441c8f
	I0717 19:23:55.494309 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:23:55.494479 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644-m03","uid":"5cbc2511-38dd-4577-bb1a-bdb52b8e9f28","resourceVersion":"1201","creationTimestamp":"2023-07-17T19:23:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:23:53Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I0717 19:23:55.494747 1084713 pod_ready.go:92] pod "kube-proxy-56qvt" in "kube-system" namespace has status "Ready":"True"
	I0717 19:23:55.494764 1084713 pod_ready.go:81] duration metric: took 868.775691ms waiting for pod "kube-proxy-56qvt" in "kube-system" namespace to be "Ready" ...
	I0717 19:23:55.494777 1084713 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j6ds6" in "kube-system" namespace to be "Ready" ...
	I0717 19:23:55.582233 1084713 request.go:628] Waited for 87.347075ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j6ds6
	I0717 19:23:55.582299 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j6ds6
	I0717 19:23:55.582304 1084713 round_trippers.go:469] Request Headers:
	I0717 19:23:55.582312 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:23:55.582318 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:23:55.587385 1084713 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 19:23:55.587418 1084713 round_trippers.go:577] Response Headers:
	I0717 19:23:55.587426 1084713 round_trippers.go:580]     Audit-Id: e46c0cb2-345f-495e-9c43-b707179f8901
	I0717 19:23:55.587432 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:23:55.587437 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:23:55.587443 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:23:55.587448 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:23:55.587456 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:23:55 GMT
	I0717 19:23:55.587661 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-j6ds6","generateName":"kube-proxy-","namespace":"kube-system","uid":"439bb5b7-0e46-4762-a9a7-e648a212ad93","resourceVersion":"1043","creationTimestamp":"2023-07-17T19:10:52Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cab15633-0bd8-4e4c-a88b-c03af1462254","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cab15633-0bd8-4e4c-a88b-c03af1462254\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I0717 19:23:55.782590 1084713 request.go:628] Waited for 194.410092ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes/multinode-464644-m02
	I0717 19:23:55.782685 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644-m02
	I0717 19:23:55.782693 1084713 round_trippers.go:469] Request Headers:
	I0717 19:23:55.782704 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:23:55.782715 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:23:55.786903 1084713 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 19:23:55.786938 1084713 round_trippers.go:577] Response Headers:
	I0717 19:23:55.786951 1084713 round_trippers.go:580]     Audit-Id: aee41705-c1f5-4eb0-8c21-658bace4aad5
	I0717 19:23:55.786958 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:23:55.786964 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:23:55.786971 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:23:55.786988 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:23:55.786997 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:23:55 GMT
	I0717 19:23:55.787109 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644-m02","uid":"8a7d3b54-fa08-45cf-b8cb-6e947d45ee9a","resourceVersion":"1027","creationTimestamp":"2023-07-17T19:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:22:12Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3441 chars]
	I0717 19:23:55.787432 1084713 pod_ready.go:92] pod "kube-proxy-j6ds6" in "kube-system" namespace has status "Ready":"True"
	I0717 19:23:55.787451 1084713 pod_ready.go:81] duration metric: took 292.666333ms waiting for pod "kube-proxy-j6ds6" in "kube-system" namespace to be "Ready" ...
	I0717 19:23:55.787468 1084713 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qwsn5" in "kube-system" namespace to be "Ready" ...
	I0717 19:23:55.982845 1084713 request.go:628] Waited for 195.251358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qwsn5
	I0717 19:23:55.982929 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qwsn5
	I0717 19:23:55.982946 1084713 round_trippers.go:469] Request Headers:
	I0717 19:23:55.982962 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:23:55.982975 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:23:55.986490 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:23:55.986530 1084713 round_trippers.go:577] Response Headers:
	I0717 19:23:55.986538 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:23:55.986543 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:23:55 GMT
	I0717 19:23:55.986549 1084713 round_trippers.go:580]     Audit-Id: 79f73219-b86d-4122-8116-1e8cf4a4730f
	I0717 19:23:55.986554 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:23:55.986560 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:23:55.986565 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:23:55.986702 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qwsn5","generateName":"kube-proxy-","namespace":"kube-system","uid":"50e3f5e0-00d9-4412-b4de-649bc29733e9","resourceVersion":"776","creationTimestamp":"2023-07-17T19:10:08Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cab15633-0bd8-4e4c-a88b-c03af1462254","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:10:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cab15633-0bd8-4e4c-a88b-c03af1462254\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0717 19:23:56.182949 1084713 request.go:628] Waited for 195.728963ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:23:56.183037 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:23:56.183044 1084713 round_trippers.go:469] Request Headers:
	I0717 19:23:56.183057 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:23:56.183068 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:23:56.186611 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:23:56.186648 1084713 round_trippers.go:577] Response Headers:
	I0717 19:23:56.186659 1084713 round_trippers.go:580]     Audit-Id: 5ca8f167-8d8d-4b0a-b1fe-2867c1334b06
	I0717 19:23:56.186668 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:23:56.186677 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:23:56.186684 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:23:56.186693 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:23:56.186700 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:23:56 GMT
	I0717 19:23:56.186915 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"903","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0717 19:23:56.187304 1084713 pod_ready.go:92] pod "kube-proxy-qwsn5" in "kube-system" namespace has status "Ready":"True"
	I0717 19:23:56.187322 1084713 pod_ready.go:81] duration metric: took 399.844019ms waiting for pod "kube-proxy-qwsn5" in "kube-system" namespace to be "Ready" ...
	I0717 19:23:56.187334 1084713 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:23:56.382882 1084713 request.go:628] Waited for 195.440563ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-464644
	I0717 19:23:56.382966 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-464644
	I0717 19:23:56.382975 1084713 round_trippers.go:469] Request Headers:
	I0717 19:23:56.382988 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:23:56.382999 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:23:56.386482 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:23:56.386516 1084713 round_trippers.go:577] Response Headers:
	I0717 19:23:56.386526 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:23:56.386536 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:23:56.386545 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:23:56.386554 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:23:56 GMT
	I0717 19:23:56.386564 1084713 round_trippers.go:580]     Audit-Id: 6cb96d08-9974-4310-bfbd-92c7e60454d5
	I0717 19:23:56.386575 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:23:56.387143 1084713 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-464644","namespace":"kube-system","uid":"04e5660d-abb0-432a-861e-c5c242edfb98","resourceVersion":"894","creationTimestamp":"2023-07-17T19:09:54Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6435a6b37c43f83175753c4199c85407","kubernetes.io/config.mirror":"6435a6b37c43f83175753c4199c85407","kubernetes.io/config.seen":"2023-07-17T19:09:54.339590320Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:09:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0717 19:23:56.583053 1084713 request.go:628] Waited for 195.448476ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:23:56.583118 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-464644
	I0717 19:23:56.583123 1084713 round_trippers.go:469] Request Headers:
	I0717 19:23:56.583131 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:23:56.583137 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:23:56.587128 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:23:56.587159 1084713 round_trippers.go:577] Response Headers:
	I0717 19:23:56.587168 1084713 round_trippers.go:580]     Audit-Id: ff1ef459-e14a-4015-81b0-fb5e78e1e0a6
	I0717 19:23:56.587174 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:23:56.587179 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:23:56.587184 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:23:56.587190 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:23:56.587195 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:23:56 GMT
	I0717 19:23:56.587462 1084713 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"903","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:09:50Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0717 19:23:56.587889 1084713 pod_ready.go:92] pod "kube-scheduler-multinode-464644" in "kube-system" namespace has status "Ready":"True"
	I0717 19:23:56.587909 1084713 pod_ready.go:81] duration metric: took 400.56884ms waiting for pod "kube-scheduler-multinode-464644" in "kube-system" namespace to be "Ready" ...
	I0717 19:23:56.587920 1084713 pod_ready.go:38] duration metric: took 2.002395617s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:23:56.587935 1084713 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 19:23:56.587983 1084713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:23:56.603628 1084713 system_svc.go:56] duration metric: took 15.681718ms WaitForService to wait for kubelet.
	I0717 19:23:56.603669 1084713 kubeadm.go:581] duration metric: took 2.042276712s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 19:23:56.603691 1084713 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:23:56.783228 1084713 request.go:628] Waited for 179.428125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes
	I0717 19:23:56.783291 1084713 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes
	I0717 19:23:56.783295 1084713 round_trippers.go:469] Request Headers:
	I0717 19:23:56.783304 1084713 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:23:56.783315 1084713 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:23:56.786637 1084713 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:23:56.786663 1084713 round_trippers.go:577] Response Headers:
	I0717 19:23:56.786671 1084713 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:23:56 GMT
	I0717 19:23:56.786681 1084713 round_trippers.go:580]     Audit-Id: 4414534f-e044-4c61-97d4-1cd70c3bf7d9
	I0717 19:23:56.786687 1084713 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:23:56.786692 1084713 round_trippers.go:580]     Content-Type: application/json
	I0717 19:23:56.786697 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 840054e6-1996-4e98-a9f1-5aa3a70e29db
	I0717 19:23:56.786703 1084713 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ced1ca0-8983-415d-969a-bdf12cad6bb9
	I0717 19:23:56.787239 1084713 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1220"},"items":[{"metadata":{"name":"multinode-464644","uid":"6e6d4f0d-6051-40f7-9779-ce3eaa806082","resourceVersion":"903","creationTimestamp":"2023-07-17T19:09:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-464644","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-464644","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_09_55_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 15134 chars]
	I0717 19:23:56.787853 1084713 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 19:23:56.787874 1084713 node_conditions.go:123] node cpu capacity is 2
	I0717 19:23:56.787885 1084713 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 19:23:56.787889 1084713 node_conditions.go:123] node cpu capacity is 2
	I0717 19:23:56.787892 1084713 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 19:23:56.787897 1084713 node_conditions.go:123] node cpu capacity is 2
	I0717 19:23:56.787901 1084713 node_conditions.go:105] duration metric: took 184.205859ms to run NodePressure ...
	I0717 19:23:56.787912 1084713 start.go:228] waiting for startup goroutines ...
	I0717 19:23:56.787940 1084713 start.go:242] writing updated cluster config ...
	I0717 19:23:56.788305 1084713 ssh_runner.go:195] Run: rm -f paused
	I0717 19:23:56.842959 1084713 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 19:23:56.845843 1084713 out.go:177] * Done! kubectl is now configured to use "multinode-464644" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-07-17 19:19:46 UTC, ends at Mon 2023-07-17 19:23:58 UTC. --
	Jul 17 19:23:58 multinode-464644 crio[714]: time="2023-07-17 19:23:58.006798620Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b1cf23ee-84f1-468e-bafb-eccdf300c79c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:23:58 multinode-464644 crio[714]: time="2023-07-17 19:23:58.007125300Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fa76b01e90d6176fd5bb3bb5637ce37ebc894a82f66dafa3121ae309b6d3af7a,PodSandboxId:4a322d5933a9b29355876fd74c22a5785e1ced08bcd2383e55d95cb7458a500d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689621652494682253,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd46cf29-49d3-4c0a-908e-a323a525d8d5,},Annotations:map[string]string{io.kubernetes.container.hash: c2da66c9,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae3b0ae9960e9aa82cebdd325142d6afabfa0dac7b02a58ba60b97744c9ca348,PodSandboxId:07a4750146b59e874e72c1dd833a1e9596f166f8b6d0d23e7ad1ad9621eb1088,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1689621630086056582,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-jgj4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe524d58-c36b-41da-82eb-f0336652f7c2,},Annotations:map[string]string{io.kubernetes.container.hash: 68a60442,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3193dffaf98a7432e9de2d2464bdfd2a7d41c53f4037f7e269bba581af9383,PodSandboxId:23bb7e83cd76790ac6bfd910be73fffe228508d985ed8cf05614cffd69ad53cb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689621628855534086,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-wqj4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991,},Annotations:map[string]string{io.kubernetes.container.hash: 6953278b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2736b5d4a342f2ac3d273066eac0f67e30169eb7c1f4d409ff79e15ed8bde8fd,PodSandboxId:b07bda47474b4aed62b046741591de768f5419e9d065683be986bc697a2c6ef3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1689621623704789843,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2tp5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 4e4881b0-4a20-4588-a87b-d2ba9c9b6939,},Annotations:map[string]string{io.kubernetes.container.hash: 711d75fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6cf4cd85bc334268040ca65da0dbd4ccc640a71d29ce44982ab3b8387eba6c1,PodSandboxId:d0294f361d339881e77c9ae8b19f6dfa4b674d33984e9b974592dfb52998cab5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689621621393812337,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qwsn5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50e3f5e0-00d9-4412-b4de-649bc29
733e9,},Annotations:map[string]string{io.kubernetes.container.hash: ff6af5cb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa98acb609735e9c234208fe3e95dce147776f50db39a85536a2c6b58351ea35,PodSandboxId:4a322d5933a9b29355876fd74c22a5785e1ced08bcd2383e55d95cb7458a500d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689621621252219050,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd46cf29-49d3-4c0a-908e-a323a525d
8d5,},Annotations:map[string]string{io.kubernetes.container.hash: c2da66c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fdaab08279a165cb8f5bc9dcfb35d4bc25e1ae6218a7993b583ad588c4b38fa,PodSandboxId:e69d1b9180452baced7178c4ea1bab8b95e1aa3778740253feed4bc0f8f7ca3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689621615033920505,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b599b6912e0d4b30d78bf7b7e52672,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: cf0d9749,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7b38e320c90c2756a9ba7c268250d0aa287f84c13cf3e2575b69b2a2cd704f1,PodSandboxId:1e56bd8d3694ecb2d196b820a73eb1755ee6c875e1ffdf7e3dffa54063489931,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689621614825188558,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6435a6b37c43f83175753c4199c85407,},Annotations:map[string]string{io.kubernetes.container.hash
: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bd3855d96bf48583c05bf555f0dc2582eda3385ab563cee3c7568005b793d21,PodSandboxId:ce949d273e823908e1178147515474da84a9ee536185b82a67ddf9bf35ccd805,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689621614583052007,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 323b8f41b30f0969feab8ff61a3ecabd,},Annotations:map[string]string{io.k
ubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1f43a47cc4914603113543b064ad6f73a8fb118885e3f90d15fcf0f7e9e537b,PodSandboxId:94029e518450211f5ebf3158370a29eaa1b3cd8232d0f71d937f273c308a0eef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689621614313190185,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b280034e13df00701aec7afc575fcc6c,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 306844e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b1cf23ee-84f1-468e-bafb-eccdf300c79c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:23:58 multinode-464644 crio[714]: time="2023-07-17 19:23:58.068794339Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ab1672c9-642f-4691-bce7-82406ee18d43 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:23:58 multinode-464644 crio[714]: time="2023-07-17 19:23:58.068972938Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ab1672c9-642f-4691-bce7-82406ee18d43 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:23:58 multinode-464644 crio[714]: time="2023-07-17 19:23:58.069235387Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fa76b01e90d6176fd5bb3bb5637ce37ebc894a82f66dafa3121ae309b6d3af7a,PodSandboxId:4a322d5933a9b29355876fd74c22a5785e1ced08bcd2383e55d95cb7458a500d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689621652494682253,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd46cf29-49d3-4c0a-908e-a323a525d8d5,},Annotations:map[string]string{io.kubernetes.container.hash: c2da66c9,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae3b0ae9960e9aa82cebdd325142d6afabfa0dac7b02a58ba60b97744c9ca348,PodSandboxId:07a4750146b59e874e72c1dd833a1e9596f166f8b6d0d23e7ad1ad9621eb1088,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1689621630086056582,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-jgj4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe524d58-c36b-41da-82eb-f0336652f7c2,},Annotations:map[string]string{io.kubernetes.container.hash: 68a60442,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3193dffaf98a7432e9de2d2464bdfd2a7d41c53f4037f7e269bba581af9383,PodSandboxId:23bb7e83cd76790ac6bfd910be73fffe228508d985ed8cf05614cffd69ad53cb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689621628855534086,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-wqj4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991,},Annotations:map[string]string{io.kubernetes.container.hash: 6953278b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2736b5d4a342f2ac3d273066eac0f67e30169eb7c1f4d409ff79e15ed8bde8fd,PodSandboxId:b07bda47474b4aed62b046741591de768f5419e9d065683be986bc697a2c6ef3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1689621623704789843,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2tp5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 4e4881b0-4a20-4588-a87b-d2ba9c9b6939,},Annotations:map[string]string{io.kubernetes.container.hash: 711d75fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6cf4cd85bc334268040ca65da0dbd4ccc640a71d29ce44982ab3b8387eba6c1,PodSandboxId:d0294f361d339881e77c9ae8b19f6dfa4b674d33984e9b974592dfb52998cab5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689621621393812337,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qwsn5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50e3f5e0-00d9-4412-b4de-649bc29
733e9,},Annotations:map[string]string{io.kubernetes.container.hash: ff6af5cb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa98acb609735e9c234208fe3e95dce147776f50db39a85536a2c6b58351ea35,PodSandboxId:4a322d5933a9b29355876fd74c22a5785e1ced08bcd2383e55d95cb7458a500d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689621621252219050,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd46cf29-49d3-4c0a-908e-a323a525d
8d5,},Annotations:map[string]string{io.kubernetes.container.hash: c2da66c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fdaab08279a165cb8f5bc9dcfb35d4bc25e1ae6218a7993b583ad588c4b38fa,PodSandboxId:e69d1b9180452baced7178c4ea1bab8b95e1aa3778740253feed4bc0f8f7ca3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689621615033920505,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b599b6912e0d4b30d78bf7b7e52672,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: cf0d9749,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7b38e320c90c2756a9ba7c268250d0aa287f84c13cf3e2575b69b2a2cd704f1,PodSandboxId:1e56bd8d3694ecb2d196b820a73eb1755ee6c875e1ffdf7e3dffa54063489931,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689621614825188558,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6435a6b37c43f83175753c4199c85407,},Annotations:map[string]string{io.kubernetes.container.hash
: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bd3855d96bf48583c05bf555f0dc2582eda3385ab563cee3c7568005b793d21,PodSandboxId:ce949d273e823908e1178147515474da84a9ee536185b82a67ddf9bf35ccd805,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689621614583052007,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 323b8f41b30f0969feab8ff61a3ecabd,},Annotations:map[string]string{io.k
ubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1f43a47cc4914603113543b064ad6f73a8fb118885e3f90d15fcf0f7e9e537b,PodSandboxId:94029e518450211f5ebf3158370a29eaa1b3cd8232d0f71d937f273c308a0eef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689621614313190185,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b280034e13df00701aec7afc575fcc6c,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 306844e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ab1672c9-642f-4691-bce7-82406ee18d43 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:23:58 multinode-464644 crio[714]: time="2023-07-17 19:23:58.110946699Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=867e1e15-b853-47d9-a474-cfb74af76233 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:23:58 multinode-464644 crio[714]: time="2023-07-17 19:23:58.111014714Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=867e1e15-b853-47d9-a474-cfb74af76233 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:23:58 multinode-464644 crio[714]: time="2023-07-17 19:23:58.111269602Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fa76b01e90d6176fd5bb3bb5637ce37ebc894a82f66dafa3121ae309b6d3af7a,PodSandboxId:4a322d5933a9b29355876fd74c22a5785e1ced08bcd2383e55d95cb7458a500d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689621652494682253,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd46cf29-49d3-4c0a-908e-a323a525d8d5,},Annotations:map[string]string{io.kubernetes.container.hash: c2da66c9,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae3b0ae9960e9aa82cebdd325142d6afabfa0dac7b02a58ba60b97744c9ca348,PodSandboxId:07a4750146b59e874e72c1dd833a1e9596f166f8b6d0d23e7ad1ad9621eb1088,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1689621630086056582,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-jgj4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe524d58-c36b-41da-82eb-f0336652f7c2,},Annotations:map[string]string{io.kubernetes.container.hash: 68a60442,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3193dffaf98a7432e9de2d2464bdfd2a7d41c53f4037f7e269bba581af9383,PodSandboxId:23bb7e83cd76790ac6bfd910be73fffe228508d985ed8cf05614cffd69ad53cb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689621628855534086,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-wqj4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991,},Annotations:map[string]string{io.kubernetes.container.hash: 6953278b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2736b5d4a342f2ac3d273066eac0f67e30169eb7c1f4d409ff79e15ed8bde8fd,PodSandboxId:b07bda47474b4aed62b046741591de768f5419e9d065683be986bc697a2c6ef3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1689621623704789843,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2tp5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 4e4881b0-4a20-4588-a87b-d2ba9c9b6939,},Annotations:map[string]string{io.kubernetes.container.hash: 711d75fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6cf4cd85bc334268040ca65da0dbd4ccc640a71d29ce44982ab3b8387eba6c1,PodSandboxId:d0294f361d339881e77c9ae8b19f6dfa4b674d33984e9b974592dfb52998cab5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689621621393812337,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qwsn5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50e3f5e0-00d9-4412-b4de-649bc29
733e9,},Annotations:map[string]string{io.kubernetes.container.hash: ff6af5cb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa98acb609735e9c234208fe3e95dce147776f50db39a85536a2c6b58351ea35,PodSandboxId:4a322d5933a9b29355876fd74c22a5785e1ced08bcd2383e55d95cb7458a500d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689621621252219050,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd46cf29-49d3-4c0a-908e-a323a525d
8d5,},Annotations:map[string]string{io.kubernetes.container.hash: c2da66c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fdaab08279a165cb8f5bc9dcfb35d4bc25e1ae6218a7993b583ad588c4b38fa,PodSandboxId:e69d1b9180452baced7178c4ea1bab8b95e1aa3778740253feed4bc0f8f7ca3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689621615033920505,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b599b6912e0d4b30d78bf7b7e52672,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: cf0d9749,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7b38e320c90c2756a9ba7c268250d0aa287f84c13cf3e2575b69b2a2cd704f1,PodSandboxId:1e56bd8d3694ecb2d196b820a73eb1755ee6c875e1ffdf7e3dffa54063489931,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689621614825188558,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6435a6b37c43f83175753c4199c85407,},Annotations:map[string]string{io.kubernetes.container.hash
: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bd3855d96bf48583c05bf555f0dc2582eda3385ab563cee3c7568005b793d21,PodSandboxId:ce949d273e823908e1178147515474da84a9ee536185b82a67ddf9bf35ccd805,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689621614583052007,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 323b8f41b30f0969feab8ff61a3ecabd,},Annotations:map[string]string{io.k
ubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1f43a47cc4914603113543b064ad6f73a8fb118885e3f90d15fcf0f7e9e537b,PodSandboxId:94029e518450211f5ebf3158370a29eaa1b3cd8232d0f71d937f273c308a0eef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689621614313190185,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b280034e13df00701aec7afc575fcc6c,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 306844e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=867e1e15-b853-47d9-a474-cfb74af76233 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:23:58 multinode-464644 crio[714]: time="2023-07-17 19:23:58.150915483Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4b72eb08-1a1a-45c5-be9a-9d9b18a54c7a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:23:58 multinode-464644 crio[714]: time="2023-07-17 19:23:58.150998946Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4b72eb08-1a1a-45c5-be9a-9d9b18a54c7a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:23:58 multinode-464644 crio[714]: time="2023-07-17 19:23:58.151266173Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fa76b01e90d6176fd5bb3bb5637ce37ebc894a82f66dafa3121ae309b6d3af7a,PodSandboxId:4a322d5933a9b29355876fd74c22a5785e1ced08bcd2383e55d95cb7458a500d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689621652494682253,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd46cf29-49d3-4c0a-908e-a323a525d8d5,},Annotations:map[string]string{io.kubernetes.container.hash: c2da66c9,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae3b0ae9960e9aa82cebdd325142d6afabfa0dac7b02a58ba60b97744c9ca348,PodSandboxId:07a4750146b59e874e72c1dd833a1e9596f166f8b6d0d23e7ad1ad9621eb1088,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1689621630086056582,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-jgj4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe524d58-c36b-41da-82eb-f0336652f7c2,},Annotations:map[string]string{io.kubernetes.container.hash: 68a60442,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3193dffaf98a7432e9de2d2464bdfd2a7d41c53f4037f7e269bba581af9383,PodSandboxId:23bb7e83cd76790ac6bfd910be73fffe228508d985ed8cf05614cffd69ad53cb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689621628855534086,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-wqj4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991,},Annotations:map[string]string{io.kubernetes.container.hash: 6953278b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2736b5d4a342f2ac3d273066eac0f67e30169eb7c1f4d409ff79e15ed8bde8fd,PodSandboxId:b07bda47474b4aed62b046741591de768f5419e9d065683be986bc697a2c6ef3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1689621623704789843,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2tp5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 4e4881b0-4a20-4588-a87b-d2ba9c9b6939,},Annotations:map[string]string{io.kubernetes.container.hash: 711d75fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6cf4cd85bc334268040ca65da0dbd4ccc640a71d29ce44982ab3b8387eba6c1,PodSandboxId:d0294f361d339881e77c9ae8b19f6dfa4b674d33984e9b974592dfb52998cab5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689621621393812337,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qwsn5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50e3f5e0-00d9-4412-b4de-649bc29
733e9,},Annotations:map[string]string{io.kubernetes.container.hash: ff6af5cb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa98acb609735e9c234208fe3e95dce147776f50db39a85536a2c6b58351ea35,PodSandboxId:4a322d5933a9b29355876fd74c22a5785e1ced08bcd2383e55d95cb7458a500d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689621621252219050,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd46cf29-49d3-4c0a-908e-a323a525d
8d5,},Annotations:map[string]string{io.kubernetes.container.hash: c2da66c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fdaab08279a165cb8f5bc9dcfb35d4bc25e1ae6218a7993b583ad588c4b38fa,PodSandboxId:e69d1b9180452baced7178c4ea1bab8b95e1aa3778740253feed4bc0f8f7ca3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689621615033920505,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b599b6912e0d4b30d78bf7b7e52672,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: cf0d9749,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7b38e320c90c2756a9ba7c268250d0aa287f84c13cf3e2575b69b2a2cd704f1,PodSandboxId:1e56bd8d3694ecb2d196b820a73eb1755ee6c875e1ffdf7e3dffa54063489931,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689621614825188558,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6435a6b37c43f83175753c4199c85407,},Annotations:map[string]string{io.kubernetes.container.hash
: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bd3855d96bf48583c05bf555f0dc2582eda3385ab563cee3c7568005b793d21,PodSandboxId:ce949d273e823908e1178147515474da84a9ee536185b82a67ddf9bf35ccd805,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689621614583052007,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 323b8f41b30f0969feab8ff61a3ecabd,},Annotations:map[string]string{io.k
ubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1f43a47cc4914603113543b064ad6f73a8fb118885e3f90d15fcf0f7e9e537b,PodSandboxId:94029e518450211f5ebf3158370a29eaa1b3cd8232d0f71d937f273c308a0eef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689621614313190185,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b280034e13df00701aec7afc575fcc6c,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 306844e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4b72eb08-1a1a-45c5-be9a-9d9b18a54c7a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:23:58 multinode-464644 crio[714]: time="2023-07-17 19:23:58.156054359Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=968da617-e1f6-4cd1-8337-31c3aa53cc68 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 19:23:58 multinode-464644 crio[714]: time="2023-07-17 19:23:58.156287382Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:07a4750146b59e874e72c1dd833a1e9596f166f8b6d0d23e7ad1ad9621eb1088,Metadata:&PodSandboxMetadata{Name:busybox-67b7f59bb-jgj4t,Uid:fe524d58-c36b-41da-82eb-f0336652f7c2,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689621628109018635,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-67b7f59bb-jgj4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe524d58-c36b-41da-82eb-f0336652f7c2,pod-template-hash: 67b7f59bb,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T19:20:20.210419838Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:23bb7e83cd76790ac6bfd910be73fffe228508d985ed8cf05614cffd69ad53cb,Metadata:&PodSandboxMetadata{Name:coredns-5d78c9869d-wqj4s,Uid:a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991,Namespace:kube-system,Attempt:0,},
State:SANDBOX_READY,CreatedAt:1689621628103466923,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5d78c9869d-wqj4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991,k8s-app: kube-dns,pod-template-hash: 5d78c9869d,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T19:20:20.210421398Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4a322d5933a9b29355876fd74c22a5785e1ced08bcd2383e55d95cb7458a500d,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:bd46cf29-49d3-4c0a-908e-a323a525d8d5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689621620595067119,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd46cf29-49d3-4c0a-908e-a323a525d8d5,},Annotations:map[string]strin
g{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-07-17T19:20:20.210429100Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d0294f361d339881e77c9ae8b19f6dfa4b674d33984e9b974592dfb52998cab5,Metadata:&PodSandboxMetadata{Name:kube-proxy-qwsn5,Uid:50e3f5e0-00d9-4412-b4de-649bc29733e9,Namespace:kube-system,Attem
pt:0,},State:SANDBOX_READY,CreatedAt:1689621620590045849,Labels:map[string]string{controller-revision-hash: 56999f657b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-qwsn5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50e3f5e0-00d9-4412-b4de-649bc29733e9,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T19:20:20.210426246Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b07bda47474b4aed62b046741591de768f5419e9d065683be986bc697a2c6ef3,Metadata:&PodSandboxMetadata{Name:kindnet-2tp5c,Uid:4e4881b0-4a20-4588-a87b-d2ba9c9b6939,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689621620544620760,Labels:map[string]string{app: kindnet,controller-revision-hash: 575d9d6996,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-2tp5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e4881b0-4a20-4588-a87b-d2ba9c9b6939,k8s-app: kindnet,pod-template-generati
on: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T19:20:20.210423770Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ce949d273e823908e1178147515474da84a9ee536185b82a67ddf9bf35ccd805,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-464644,Uid:323b8f41b30f0969feab8ff61a3ecabd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689621613767486187,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 323b8f41b30f0969feab8ff61a3ecabd,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 323b8f41b30f0969feab8ff61a3ecabd,kubernetes.io/config.seen: 2023-07-17T19:20:13.206275749Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e69d1b9180452baced7178c4ea1bab8b95e1aa3778740253feed4bc0f8f7ca3d,Metadata:&PodSandboxMetadata
{Name:etcd-multinode-464644,Uid:d5b599b6912e0d4b30d78bf7b7e52672,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689621613756674820,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b599b6912e0d4b30d78bf7b7e52672,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.174:2379,kubernetes.io/config.hash: d5b599b6912e0d4b30d78bf7b7e52672,kubernetes.io/config.seen: 2023-07-17T19:20:13.206281788Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1e56bd8d3694ecb2d196b820a73eb1755ee6c875e1ffdf7e3dffa54063489931,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-464644,Uid:6435a6b37c43f83175753c4199c85407,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689621613731433002,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.
kubernetes.pod.name: kube-scheduler-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6435a6b37c43f83175753c4199c85407,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6435a6b37c43f83175753c4199c85407,kubernetes.io/config.seen: 2023-07-17T19:20:13.206280752Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:94029e518450211f5ebf3158370a29eaa1b3cd8232d0f71d937f273c308a0eef,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-464644,Uid:b280034e13df00701aec7afc575fcc6c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689621613726716829,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b280034e13df00701aec7afc575fcc6c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.174:8443,kubernet
es.io/config.hash: b280034e13df00701aec7afc575fcc6c,kubernetes.io/config.seen: 2023-07-17T19:20:13.206283241Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=968da617-e1f6-4cd1-8337-31c3aa53cc68 name=/runtime.v1.RuntimeService/ListPodSandbox
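The ListContainers and ListPodSandbox entries above are CRI-O answering the periodic runtime-status polls that clients such as the kubelet issue against the CRI socket; with an empty filter, CRI-O logs "No filters were applied, returning full container list" and dumps every container and sandbox on the node, which is why the same payload repeats across the entries that follow. A minimal sketch of issuing the same two RPCs directly, assuming the default CRI-O socket path on a minikube node and the CRI v1 Go client (the /runtime.v1alpha2 entries in the log come from an older client of the same service), is:

// Sketch only: queries CRI-O the way a CRI client would.
// The socket path and timeout below are assumptions, not taken from the report.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed default CRI-O socket on the node.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Same as the ListContainers calls in the log: no filter, full list returned.
	containers, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range containers.Containers {
		fmt.Printf("%s %s %s\n", c.Id, c.Metadata.Name, c.State)
	}

	// Same as the ListPodSandbox call in the log: all sandboxes, no filter.
	sandboxes, err := client.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		panic(err)
	}
	for _, s := range sandboxes.Items {
		fmt.Printf("%s %s/%s %s\n", s.Id, s.Metadata.Namespace, s.Metadata.Name, s.State)
	}
}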
	Jul 17 19:23:58 multinode-464644 crio[714]: time="2023-07-17 19:23:58.157866776Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=31ab0f3c-baa6-4992-b36a-de31131dfb0a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:23:58 multinode-464644 crio[714]: time="2023-07-17 19:23:58.157920889Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=31ab0f3c-baa6-4992-b36a-de31131dfb0a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:23:58 multinode-464644 crio[714]: time="2023-07-17 19:23:58.158426460Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fa76b01e90d6176fd5bb3bb5637ce37ebc894a82f66dafa3121ae309b6d3af7a,PodSandboxId:4a322d5933a9b29355876fd74c22a5785e1ced08bcd2383e55d95cb7458a500d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689621652494682253,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd46cf29-49d3-4c0a-908e-a323a525d8d5,},Annotations:map[string]string{io.kubernetes.container.hash: c2da66c9,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae3b0ae9960e9aa82cebdd325142d6afabfa0dac7b02a58ba60b97744c9ca348,PodSandboxId:07a4750146b59e874e72c1dd833a1e9596f166f8b6d0d23e7ad1ad9621eb1088,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1689621630086056582,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-jgj4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe524d58-c36b-41da-82eb-f0336652f7c2,},Annotations:map[string]string{io.kubernetes.container.hash: 68a60442,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3193dffaf98a7432e9de2d2464bdfd2a7d41c53f4037f7e269bba581af9383,PodSandboxId:23bb7e83cd76790ac6bfd910be73fffe228508d985ed8cf05614cffd69ad53cb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689621628855534086,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-wqj4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991,},Annotations:map[string]string{io.kubernetes.container.hash: 6953278b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2736b5d4a342f2ac3d273066eac0f67e30169eb7c1f4d409ff79e15ed8bde8fd,PodSandboxId:b07bda47474b4aed62b046741591de768f5419e9d065683be986bc697a2c6ef3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1689621623704789843,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2tp5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 4e4881b0-4a20-4588-a87b-d2ba9c9b6939,},Annotations:map[string]string{io.kubernetes.container.hash: 711d75fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6cf4cd85bc334268040ca65da0dbd4ccc640a71d29ce44982ab3b8387eba6c1,PodSandboxId:d0294f361d339881e77c9ae8b19f6dfa4b674d33984e9b974592dfb52998cab5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689621621393812337,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qwsn5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50e3f5e0-00d9-4412-b4de-649bc29
733e9,},Annotations:map[string]string{io.kubernetes.container.hash: ff6af5cb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa98acb609735e9c234208fe3e95dce147776f50db39a85536a2c6b58351ea35,PodSandboxId:4a322d5933a9b29355876fd74c22a5785e1ced08bcd2383e55d95cb7458a500d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689621621252219050,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd46cf29-49d3-4c0a-908e-a323a525d
8d5,},Annotations:map[string]string{io.kubernetes.container.hash: c2da66c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fdaab08279a165cb8f5bc9dcfb35d4bc25e1ae6218a7993b583ad588c4b38fa,PodSandboxId:e69d1b9180452baced7178c4ea1bab8b95e1aa3778740253feed4bc0f8f7ca3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689621615033920505,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b599b6912e0d4b30d78bf7b7e52672,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: cf0d9749,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7b38e320c90c2756a9ba7c268250d0aa287f84c13cf3e2575b69b2a2cd704f1,PodSandboxId:1e56bd8d3694ecb2d196b820a73eb1755ee6c875e1ffdf7e3dffa54063489931,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689621614825188558,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6435a6b37c43f83175753c4199c85407,},Annotations:map[string]string{io.kubernetes.container.hash
: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bd3855d96bf48583c05bf555f0dc2582eda3385ab563cee3c7568005b793d21,PodSandboxId:ce949d273e823908e1178147515474da84a9ee536185b82a67ddf9bf35ccd805,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689621614583052007,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 323b8f41b30f0969feab8ff61a3ecabd,},Annotations:map[string]string{io.k
ubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1f43a47cc4914603113543b064ad6f73a8fb118885e3f90d15fcf0f7e9e537b,PodSandboxId:94029e518450211f5ebf3158370a29eaa1b3cd8232d0f71d937f273c308a0eef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689621614313190185,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b280034e13df00701aec7afc575fcc6c,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 306844e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=31ab0f3c-baa6-4992-b36a-de31131dfb0a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:23:58 multinode-464644 crio[714]: time="2023-07-17 19:23:58.191290724Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=696d5fef-4c66-47b6-b455-0c33ab2437a4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:23:58 multinode-464644 crio[714]: time="2023-07-17 19:23:58.191512311Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=696d5fef-4c66-47b6-b455-0c33ab2437a4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:23:58 multinode-464644 crio[714]: time="2023-07-17 19:23:58.191769130Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fa76b01e90d6176fd5bb3bb5637ce37ebc894a82f66dafa3121ae309b6d3af7a,PodSandboxId:4a322d5933a9b29355876fd74c22a5785e1ced08bcd2383e55d95cb7458a500d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689621652494682253,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd46cf29-49d3-4c0a-908e-a323a525d8d5,},Annotations:map[string]string{io.kubernetes.container.hash: c2da66c9,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae3b0ae9960e9aa82cebdd325142d6afabfa0dac7b02a58ba60b97744c9ca348,PodSandboxId:07a4750146b59e874e72c1dd833a1e9596f166f8b6d0d23e7ad1ad9621eb1088,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1689621630086056582,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-jgj4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe524d58-c36b-41da-82eb-f0336652f7c2,},Annotations:map[string]string{io.kubernetes.container.hash: 68a60442,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3193dffaf98a7432e9de2d2464bdfd2a7d41c53f4037f7e269bba581af9383,PodSandboxId:23bb7e83cd76790ac6bfd910be73fffe228508d985ed8cf05614cffd69ad53cb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689621628855534086,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-wqj4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991,},Annotations:map[string]string{io.kubernetes.container.hash: 6953278b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2736b5d4a342f2ac3d273066eac0f67e30169eb7c1f4d409ff79e15ed8bde8fd,PodSandboxId:b07bda47474b4aed62b046741591de768f5419e9d065683be986bc697a2c6ef3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1689621623704789843,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2tp5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 4e4881b0-4a20-4588-a87b-d2ba9c9b6939,},Annotations:map[string]string{io.kubernetes.container.hash: 711d75fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6cf4cd85bc334268040ca65da0dbd4ccc640a71d29ce44982ab3b8387eba6c1,PodSandboxId:d0294f361d339881e77c9ae8b19f6dfa4b674d33984e9b974592dfb52998cab5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689621621393812337,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qwsn5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50e3f5e0-00d9-4412-b4de-649bc29
733e9,},Annotations:map[string]string{io.kubernetes.container.hash: ff6af5cb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa98acb609735e9c234208fe3e95dce147776f50db39a85536a2c6b58351ea35,PodSandboxId:4a322d5933a9b29355876fd74c22a5785e1ced08bcd2383e55d95cb7458a500d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689621621252219050,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd46cf29-49d3-4c0a-908e-a323a525d
8d5,},Annotations:map[string]string{io.kubernetes.container.hash: c2da66c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fdaab08279a165cb8f5bc9dcfb35d4bc25e1ae6218a7993b583ad588c4b38fa,PodSandboxId:e69d1b9180452baced7178c4ea1bab8b95e1aa3778740253feed4bc0f8f7ca3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689621615033920505,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b599b6912e0d4b30d78bf7b7e52672,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: cf0d9749,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7b38e320c90c2756a9ba7c268250d0aa287f84c13cf3e2575b69b2a2cd704f1,PodSandboxId:1e56bd8d3694ecb2d196b820a73eb1755ee6c875e1ffdf7e3dffa54063489931,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689621614825188558,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6435a6b37c43f83175753c4199c85407,},Annotations:map[string]string{io.kubernetes.container.hash
: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bd3855d96bf48583c05bf555f0dc2582eda3385ab563cee3c7568005b793d21,PodSandboxId:ce949d273e823908e1178147515474da84a9ee536185b82a67ddf9bf35ccd805,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689621614583052007,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 323b8f41b30f0969feab8ff61a3ecabd,},Annotations:map[string]string{io.k
ubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1f43a47cc4914603113543b064ad6f73a8fb118885e3f90d15fcf0f7e9e537b,PodSandboxId:94029e518450211f5ebf3158370a29eaa1b3cd8232d0f71d937f273c308a0eef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689621614313190185,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b280034e13df00701aec7afc575fcc6c,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 306844e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=696d5fef-4c66-47b6-b455-0c33ab2437a4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:23:58 multinode-464644 crio[714]: time="2023-07-17 19:23:58.230117492Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e909f497-7461-4d67-838c-05f56a10a1f2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:23:58 multinode-464644 crio[714]: time="2023-07-17 19:23:58.230213739Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e909f497-7461-4d67-838c-05f56a10a1f2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:23:58 multinode-464644 crio[714]: time="2023-07-17 19:23:58.230536981Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fa76b01e90d6176fd5bb3bb5637ce37ebc894a82f66dafa3121ae309b6d3af7a,PodSandboxId:4a322d5933a9b29355876fd74c22a5785e1ced08bcd2383e55d95cb7458a500d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689621652494682253,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd46cf29-49d3-4c0a-908e-a323a525d8d5,},Annotations:map[string]string{io.kubernetes.container.hash: c2da66c9,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae3b0ae9960e9aa82cebdd325142d6afabfa0dac7b02a58ba60b97744c9ca348,PodSandboxId:07a4750146b59e874e72c1dd833a1e9596f166f8b6d0d23e7ad1ad9621eb1088,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1689621630086056582,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-jgj4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe524d58-c36b-41da-82eb-f0336652f7c2,},Annotations:map[string]string{io.kubernetes.container.hash: 68a60442,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3193dffaf98a7432e9de2d2464bdfd2a7d41c53f4037f7e269bba581af9383,PodSandboxId:23bb7e83cd76790ac6bfd910be73fffe228508d985ed8cf05614cffd69ad53cb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689621628855534086,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-wqj4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991,},Annotations:map[string]string{io.kubernetes.container.hash: 6953278b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2736b5d4a342f2ac3d273066eac0f67e30169eb7c1f4d409ff79e15ed8bde8fd,PodSandboxId:b07bda47474b4aed62b046741591de768f5419e9d065683be986bc697a2c6ef3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1689621623704789843,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2tp5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 4e4881b0-4a20-4588-a87b-d2ba9c9b6939,},Annotations:map[string]string{io.kubernetes.container.hash: 711d75fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6cf4cd85bc334268040ca65da0dbd4ccc640a71d29ce44982ab3b8387eba6c1,PodSandboxId:d0294f361d339881e77c9ae8b19f6dfa4b674d33984e9b974592dfb52998cab5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689621621393812337,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qwsn5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50e3f5e0-00d9-4412-b4de-649bc29
733e9,},Annotations:map[string]string{io.kubernetes.container.hash: ff6af5cb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa98acb609735e9c234208fe3e95dce147776f50db39a85536a2c6b58351ea35,PodSandboxId:4a322d5933a9b29355876fd74c22a5785e1ced08bcd2383e55d95cb7458a500d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689621621252219050,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd46cf29-49d3-4c0a-908e-a323a525d
8d5,},Annotations:map[string]string{io.kubernetes.container.hash: c2da66c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fdaab08279a165cb8f5bc9dcfb35d4bc25e1ae6218a7993b583ad588c4b38fa,PodSandboxId:e69d1b9180452baced7178c4ea1bab8b95e1aa3778740253feed4bc0f8f7ca3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689621615033920505,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b599b6912e0d4b30d78bf7b7e52672,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: cf0d9749,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7b38e320c90c2756a9ba7c268250d0aa287f84c13cf3e2575b69b2a2cd704f1,PodSandboxId:1e56bd8d3694ecb2d196b820a73eb1755ee6c875e1ffdf7e3dffa54063489931,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689621614825188558,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6435a6b37c43f83175753c4199c85407,},Annotations:map[string]string{io.kubernetes.container.hash
: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bd3855d96bf48583c05bf555f0dc2582eda3385ab563cee3c7568005b793d21,PodSandboxId:ce949d273e823908e1178147515474da84a9ee536185b82a67ddf9bf35ccd805,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689621614583052007,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 323b8f41b30f0969feab8ff61a3ecabd,},Annotations:map[string]string{io.k
ubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1f43a47cc4914603113543b064ad6f73a8fb118885e3f90d15fcf0f7e9e537b,PodSandboxId:94029e518450211f5ebf3158370a29eaa1b3cd8232d0f71d937f273c308a0eef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689621614313190185,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b280034e13df00701aec7afc575fcc6c,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 306844e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e909f497-7461-4d67-838c-05f56a10a1f2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:23:58 multinode-464644 crio[714]: time="2023-07-17 19:23:58.269478160Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bd0fc333-de6c-4cec-b5cd-8ebcdfedf062 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:23:58 multinode-464644 crio[714]: time="2023-07-17 19:23:58.269582611Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bd0fc333-de6c-4cec-b5cd-8ebcdfedf062 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:23:58 multinode-464644 crio[714]: time="2023-07-17 19:23:58.269840571Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fa76b01e90d6176fd5bb3bb5637ce37ebc894a82f66dafa3121ae309b6d3af7a,PodSandboxId:4a322d5933a9b29355876fd74c22a5785e1ced08bcd2383e55d95cb7458a500d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689621652494682253,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd46cf29-49d3-4c0a-908e-a323a525d8d5,},Annotations:map[string]string{io.kubernetes.container.hash: c2da66c9,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae3b0ae9960e9aa82cebdd325142d6afabfa0dac7b02a58ba60b97744c9ca348,PodSandboxId:07a4750146b59e874e72c1dd833a1e9596f166f8b6d0d23e7ad1ad9621eb1088,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1689621630086056582,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-jgj4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe524d58-c36b-41da-82eb-f0336652f7c2,},Annotations:map[string]string{io.kubernetes.container.hash: 68a60442,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3193dffaf98a7432e9de2d2464bdfd2a7d41c53f4037f7e269bba581af9383,PodSandboxId:23bb7e83cd76790ac6bfd910be73fffe228508d985ed8cf05614cffd69ad53cb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689621628855534086,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-wqj4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991,},Annotations:map[string]string{io.kubernetes.container.hash: 6953278b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2736b5d4a342f2ac3d273066eac0f67e30169eb7c1f4d409ff79e15ed8bde8fd,PodSandboxId:b07bda47474b4aed62b046741591de768f5419e9d065683be986bc697a2c6ef3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1689621623704789843,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2tp5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 4e4881b0-4a20-4588-a87b-d2ba9c9b6939,},Annotations:map[string]string{io.kubernetes.container.hash: 711d75fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6cf4cd85bc334268040ca65da0dbd4ccc640a71d29ce44982ab3b8387eba6c1,PodSandboxId:d0294f361d339881e77c9ae8b19f6dfa4b674d33984e9b974592dfb52998cab5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689621621393812337,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qwsn5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50e3f5e0-00d9-4412-b4de-649bc29
733e9,},Annotations:map[string]string{io.kubernetes.container.hash: ff6af5cb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa98acb609735e9c234208fe3e95dce147776f50db39a85536a2c6b58351ea35,PodSandboxId:4a322d5933a9b29355876fd74c22a5785e1ced08bcd2383e55d95cb7458a500d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689621621252219050,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd46cf29-49d3-4c0a-908e-a323a525d
8d5,},Annotations:map[string]string{io.kubernetes.container.hash: c2da66c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fdaab08279a165cb8f5bc9dcfb35d4bc25e1ae6218a7993b583ad588c4b38fa,PodSandboxId:e69d1b9180452baced7178c4ea1bab8b95e1aa3778740253feed4bc0f8f7ca3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689621615033920505,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5b599b6912e0d4b30d78bf7b7e52672,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: cf0d9749,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7b38e320c90c2756a9ba7c268250d0aa287f84c13cf3e2575b69b2a2cd704f1,PodSandboxId:1e56bd8d3694ecb2d196b820a73eb1755ee6c875e1ffdf7e3dffa54063489931,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689621614825188558,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6435a6b37c43f83175753c4199c85407,},Annotations:map[string]string{io.kubernetes.container.hash
: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bd3855d96bf48583c05bf555f0dc2582eda3385ab563cee3c7568005b793d21,PodSandboxId:ce949d273e823908e1178147515474da84a9ee536185b82a67ddf9bf35ccd805,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689621614583052007,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 323b8f41b30f0969feab8ff61a3ecabd,},Annotations:map[string]string{io.k
ubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1f43a47cc4914603113543b064ad6f73a8fb118885e3f90d15fcf0f7e9e537b,PodSandboxId:94029e518450211f5ebf3158370a29eaa1b3cd8232d0f71d937f273c308a0eef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689621614313190185,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-464644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b280034e13df00701aec7afc575fcc6c,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 306844e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bd0fc333-de6c-4cec-b5cd-8ebcdfedf062 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	fa76b01e90d61       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       2                   4a322d5933a9b
	ae3b0ae9960e9       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   1                   07a4750146b59
	2e3193dffaf98       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   23bb7e83cd767
	2736b5d4a342f       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da                                      3 minutes ago       Running             kindnet-cni               1                   b07bda47474b4
	e6cf4cd85bc33       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c                                      3 minutes ago       Running             kube-proxy                1                   d0294f361d339
	aa98acb609735       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       1                   4a322d5933a9b
	1fdaab08279a1       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681                                      3 minutes ago       Running             etcd                      1                   e69d1b9180452
	f7b38e320c90c       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a                                      3 minutes ago       Running             kube-scheduler            1                   1e56bd8d3694e
	7bd3855d96bf4       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f                                      3 minutes ago       Running             kube-controller-manager   1                   ce949d273e823
	c1f43a47cc491       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a                                      3 minutes ago       Running             kube-apiserver            1                   94029e5184502
	
	* 
	* ==> coredns [2e3193dffaf98a7432e9de2d2464bdfd2a7d41c53f4037f7e269bba581af9383] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41264 - 36745 "HINFO IN 750999961416934747.5904006441574971177. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.012998548s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-464644
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-464644
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5
	                    minikube.k8s.io/name=multinode-464644
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T19_09_55_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 19:09:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-464644
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 19:23:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 19:20:50 +0000   Mon, 17 Jul 2023 19:09:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 19:20:50 +0000   Mon, 17 Jul 2023 19:09:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 19:20:50 +0000   Mon, 17 Jul 2023 19:09:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 19:20:50 +0000   Mon, 17 Jul 2023 19:20:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.174
	  Hostname:    multinode-464644
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 87d391e2c653469e8471a8f89fe7ad1d
	  System UUID:                87d391e2-c653-469e-8471-a8f89fe7ad1d
	  Boot ID:                    452ca2ab-3668-4e2b-93ce-47dcfefbe206
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-jgj4t                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-5d78c9869d-wqj4s                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-multinode-464644                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-2tp5c                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-multinode-464644             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-multinode-464644    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-qwsn5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-multinode-464644             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 3m36s                  kube-proxy       
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node multinode-464644 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node multinode-464644 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node multinode-464644 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     14m                    kubelet          Node multinode-464644 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  14m                    kubelet          Node multinode-464644 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                    kubelet          Node multinode-464644 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           13m                    node-controller  Node multinode-464644 event: Registered Node multinode-464644 in Controller
	  Normal  NodeReady                13m                    kubelet          Node multinode-464644 status is now: NodeReady
	  Normal  Starting                 3m45s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m45s (x8 over 3m45s)  kubelet          Node multinode-464644 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m45s (x8 over 3m45s)  kubelet          Node multinode-464644 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m45s (x7 over 3m45s)  kubelet          Node multinode-464644 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m26s                  node-controller  Node multinode-464644 event: Registered Node multinode-464644 in Controller
	
	
	Name:               multinode-464644-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-464644-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 19:22:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-464644-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 19:23:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 19:22:12 +0000   Mon, 17 Jul 2023 19:22:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 19:22:12 +0000   Mon, 17 Jul 2023 19:22:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 19:22:12 +0000   Mon, 17 Jul 2023 19:22:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 19:22:12 +0000   Mon, 17 Jul 2023 19:22:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.49
	  Hostname:    multinode-464644-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 abd9ef3b2307468e830a565575aeab4d
	  System UUID:                abd9ef3b-2307-468e-830a-565575aeab4d
	  Boot ID:                    c2528652-f898-4c51-b8a2-3ef727bc0aaa
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-2697q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kindnet-t77xh              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-j6ds6           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                  From        Message
	  ----     ------                   ----                 ----        -------
	  Normal   Starting                 13m                  kube-proxy  
	  Normal   Starting                 104s                 kube-proxy  
	  Normal   NodeHasSufficientMemory  13m (x5 over 13m)    kubelet     Node multinode-464644-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x5 over 13m)    kubelet     Node multinode-464644-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x5 over 13m)    kubelet     Node multinode-464644-m02 status is now: NodeHasSufficientPID
	  Normal   NodeReady                12m                  kubelet     Node multinode-464644-m02 status is now: NodeReady
	  Normal   NodeNotReady             2m57s                kubelet     Node multinode-464644-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m8s (x2 over 3m8s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 106s                 kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  106s (x2 over 106s)  kubelet     Node multinode-464644-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    106s (x2 over 106s)  kubelet     Node multinode-464644-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     106s (x2 over 106s)  kubelet     Node multinode-464644-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  106s                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                106s                 kubelet     Node multinode-464644-m02 status is now: NodeReady
	
	
	Name:               multinode-464644-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-464644-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 19:23:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-464644-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 19:23:53 +0000   Mon, 17 Jul 2023 19:23:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 19:23:53 +0000   Mon, 17 Jul 2023 19:23:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 19:23:53 +0000   Mon, 17 Jul 2023 19:23:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 19:23:53 +0000   Mon, 17 Jul 2023 19:23:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.247
	  Hostname:    multinode-464644-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 f5d0efa984a14b2b871879029390eb17
	  System UUID:                f5d0efa9-84a1-4b2b-8718-79029390eb17
	  Boot ID:                    1c9a4f36-1942-43e7-ab48-ca259639ad68
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-ftsv6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kindnet-znndf              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-56qvt           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From        Message
	  ----     ------                   ----               ----        -------
	  Normal   Starting                 11m                kube-proxy  
	  Normal   Starting                 12m                kube-proxy  
	  Normal   Starting                 3s                 kube-proxy  
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)  kubelet     Node multinode-464644-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)  kubelet     Node multinode-464644-m03 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)  kubelet     Node multinode-464644-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                12m                kubelet     Node multinode-464644-m03 status is now: NodeReady
	  Normal   Starting                 11m                kubelet     Starting kubelet.
	  Normal   NodeAllocatableEnforced  11m                kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)  kubelet     Node multinode-464644-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)  kubelet     Node multinode-464644-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)  kubelet     Node multinode-464644-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                11m                kubelet     Node multinode-464644-m03 status is now: NodeReady
	  Normal   NodeNotReady             65s                kubelet     Node multinode-464644-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        35s (x2 over 95s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 5s                 kubelet     Starting kubelet.
	  Normal   NodeHasNoDiskPressure    5s (x2 over 5s)    kubelet     Node multinode-464644-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5s (x2 over 5s)    kubelet     Node multinode-464644-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5s                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                5s                 kubelet     Node multinode-464644-m03 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  5s (x2 over 5s)    kubelet     Node multinode-464644-m03 status is now: NodeHasSufficientMemory
	
	* 
	* ==> dmesg <==
	* [Jul17 19:19] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.074140] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.430503] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.918213] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.154979] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.704095] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.672820] systemd-fstab-generator[639]: Ignoring "noauto" for root device
	[  +0.109363] systemd-fstab-generator[650]: Ignoring "noauto" for root device
	[  +0.151829] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[  +0.097345] systemd-fstab-generator[674]: Ignoring "noauto" for root device
	[  +0.234989] systemd-fstab-generator[698]: Ignoring "noauto" for root device
	[Jul17 19:20] systemd-fstab-generator[914]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [1fdaab08279a165cb8f5bc9dcfb35d4bc25e1ae6218a7993b583ad588c4b38fa] <==
	* {"level":"info","ts":"2023-07-17T19:20:17.197Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-17T19:20:17.197Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-17T19:20:17.197Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 switched to configuration voters=(8283008283800597511)"}
	{"level":"info","ts":"2023-07-17T19:20:17.198Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3f65b9220f75d9a5","local-member-id":"72f328261b8d7407","added-peer-id":"72f328261b8d7407","added-peer-peer-urls":["https://192.168.39.174:2380"]}
	{"level":"info","ts":"2023-07-17T19:20:17.198Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3f65b9220f75d9a5","local-member-id":"72f328261b8d7407","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T19:20:17.198Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T19:20:17.220Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.39.174:2380"}
	{"level":"info","ts":"2023-07-17T19:20:17.220Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.39.174:2380"}
	{"level":"info","ts":"2023-07-17T19:20:17.220Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-07-17T19:20:17.221Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-17T19:20:17.221Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"72f328261b8d7407","initial-advertise-peer-urls":["https://192.168.39.174:2380"],"listen-peer-urls":["https://192.168.39.174:2380"],"advertise-client-urls":["https://192.168.39.174:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.174:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-17T19:20:18.341Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 is starting a new election at term 2"}
	{"level":"info","ts":"2023-07-17T19:20:18.342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-07-17T19:20:18.342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 received MsgPreVoteResp from 72f328261b8d7407 at term 2"}
	{"level":"info","ts":"2023-07-17T19:20:18.342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 became candidate at term 3"}
	{"level":"info","ts":"2023-07-17T19:20:18.342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 received MsgVoteResp from 72f328261b8d7407 at term 3"}
	{"level":"info","ts":"2023-07-17T19:20:18.342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 became leader at term 3"}
	{"level":"info","ts":"2023-07-17T19:20:18.342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 72f328261b8d7407 elected leader 72f328261b8d7407 at term 3"}
	{"level":"info","ts":"2023-07-17T19:20:18.346Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"72f328261b8d7407","local-member-attributes":"{Name:multinode-464644 ClientURLs:[https://192.168.39.174:2379]}","request-path":"/0/members/72f328261b8d7407/attributes","cluster-id":"3f65b9220f75d9a5","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-17T19:20:18.347Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T19:20:18.348Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-17T19:20:18.348Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T19:20:18.350Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.39.174:2379"}
	{"level":"info","ts":"2023-07-17T19:20:18.350Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-17T19:20:18.360Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  19:23:58 up 4 min,  0 users,  load average: 0.47, 0.31, 0.13
	Linux multinode-464644 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [2736b5d4a342f2ac3d273066eac0f67e30169eb7c1f4d409ff79e15ed8bde8fd] <==
	* I0717 19:23:25.546254       1 main.go:223] Handling node with IPs: map[192.168.39.174:{}]
	I0717 19:23:25.546307       1 main.go:227] handling current node
	I0717 19:23:25.546319       1 main.go:223] Handling node with IPs: map[192.168.39.49:{}]
	I0717 19:23:25.546325       1 main.go:250] Node multinode-464644-m02 has CIDR [10.244.1.0/24] 
	I0717 19:23:25.546496       1 main.go:223] Handling node with IPs: map[192.168.39.247:{}]
	I0717 19:23:25.546530       1 main.go:250] Node multinode-464644-m03 has CIDR [10.244.3.0/24] 
	I0717 19:23:35.557240       1 main.go:223] Handling node with IPs: map[192.168.39.174:{}]
	I0717 19:23:35.557446       1 main.go:227] handling current node
	I0717 19:23:35.557473       1 main.go:223] Handling node with IPs: map[192.168.39.49:{}]
	I0717 19:23:35.557487       1 main.go:250] Node multinode-464644-m02 has CIDR [10.244.1.0/24] 
	I0717 19:23:35.557628       1 main.go:223] Handling node with IPs: map[192.168.39.247:{}]
	I0717 19:23:35.557680       1 main.go:250] Node multinode-464644-m03 has CIDR [10.244.3.0/24] 
	I0717 19:23:45.565481       1 main.go:223] Handling node with IPs: map[192.168.39.174:{}]
	I0717 19:23:45.565541       1 main.go:227] handling current node
	I0717 19:23:45.565567       1 main.go:223] Handling node with IPs: map[192.168.39.49:{}]
	I0717 19:23:45.565574       1 main.go:250] Node multinode-464644-m02 has CIDR [10.244.1.0/24] 
	I0717 19:23:45.565697       1 main.go:223] Handling node with IPs: map[192.168.39.247:{}]
	I0717 19:23:45.565737       1 main.go:250] Node multinode-464644-m03 has CIDR [10.244.3.0/24] 
	I0717 19:23:55.583189       1 main.go:223] Handling node with IPs: map[192.168.39.174:{}]
	I0717 19:23:55.583297       1 main.go:227] handling current node
	I0717 19:23:55.583407       1 main.go:223] Handling node with IPs: map[192.168.39.49:{}]
	I0717 19:23:55.583421       1 main.go:250] Node multinode-464644-m02 has CIDR [10.244.1.0/24] 
	I0717 19:23:55.583725       1 main.go:223] Handling node with IPs: map[192.168.39.247:{}]
	I0717 19:23:55.583771       1 main.go:250] Node multinode-464644-m03 has CIDR [10.244.2.0/24] 
	I0717 19:23:55.583852       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.247 Flags: [] Table: 0} 
	
	* 
	* ==> kube-apiserver [c1f43a47cc4914603113543b064ad6f73a8fb118885e3f90d15fcf0f7e9e537b] <==
	* I0717 19:20:22.752247       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0717 19:20:22.844048       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 19:20:22.858204       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	E0717 19:20:29.986208       1 apf_controller.go:411] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=[node-high system workload-high workload-low catch-all global-default leader-election] items=[{target:73 lowerBound:73 upperBound:698} {target:50 lowerBound:50 upperBound:674} {target:49 lowerBound:49 upperBound:698} {target:24 lowerBound:24 upperBound:845} {target:NaN lowerBound:13 upperBound:613} {target:24 lowerBound:24 upperBound:649} {target:25 lowerBound:25 upperBound:625}]
	E0717 19:20:39.987628       1 apf_controller.go:411] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=[leader-election node-high system workload-high workload-low catch-all global-default] items=[{target:25 lowerBound:25 upperBound:625} {target:73 lowerBound:73 upperBound:698} {target:50 lowerBound:50 upperBound:674} {target:49 lowerBound:49 upperBound:698} {target:24 lowerBound:24 upperBound:845} {target:NaN lowerBound:13 upperBound:613} {target:24 lowerBound:24 upperBound:649}]
	E0717 19:20:49.988689       1 apf_controller.go:411] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=[global-default leader-election node-high system workload-high workload-low catch-all] items=[{target:24 lowerBound:24 upperBound:649} {target:25 lowerBound:25 upperBound:625} {target:73 lowerBound:73 upperBound:698} {target:50 lowerBound:50 upperBound:674} {target:49 lowerBound:49 upperBound:698} {target:24 lowerBound:24 upperBound:845} {target:NaN lowerBound:13 upperBound:613}]
	E0717 19:20:59.989654       1 apf_controller.go:411] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=[system workload-high workload-low catch-all global-default leader-election node-high] items=[{target:50 lowerBound:50 upperBound:674} {target:49 lowerBound:49 upperBound:698} {target:24 lowerBound:24 upperBound:845} {target:NaN lowerBound:13 upperBound:613} {target:24 lowerBound:24 upperBound:649} {target:25 lowerBound:25 upperBound:625} {target:73 lowerBound:73 upperBound:698}]
	E0717 19:21:09.990471       1 apf_controller.go:411] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=[system workload-high workload-low catch-all global-default leader-election node-high] items=[{target:50 lowerBound:50 upperBound:674} {target:49 lowerBound:49 upperBound:698} {target:24 lowerBound:24 upperBound:845} {target:NaN lowerBound:13 upperBound:613} {target:24 lowerBound:24 upperBound:649} {target:25 lowerBound:25 upperBound:625} {target:73 lowerBound:73 upperBound:698}]
	I0717 19:21:10.038118       1 controller.go:624] quota admission added evaluator for: endpoints
	E0717 19:21:19.990967       1 apf_controller.go:411] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=[node-high system workload-high workload-low catch-all global-default leader-election] items=[{target:73 lowerBound:73 upperBound:698} {target:50 lowerBound:50 upperBound:674} {target:49 lowerBound:49 upperBound:698} {target:24 lowerBound:24 upperBound:845} {target:NaN lowerBound:13 upperBound:613} {target:24 lowerBound:24 upperBound:649} {target:25 lowerBound:25 upperBound:625}]
	E0717 19:21:29.991312       1 apf_controller.go:411] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=[node-high system workload-high workload-low catch-all global-default leader-election] items=[{target:73 lowerBound:73 upperBound:698} {target:50 lowerBound:50 upperBound:674} {target:49 lowerBound:49 upperBound:698} {target:24 lowerBound:24 upperBound:845} {target:NaN lowerBound:13 upperBound:613} {target:24 lowerBound:24 upperBound:649} {target:25 lowerBound:25 upperBound:625}]
	E0717 19:21:39.992318       1 apf_controller.go:411] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=[global-default leader-election node-high system workload-high workload-low catch-all] items=[{target:24 lowerBound:24 upperBound:649} {target:25 lowerBound:25 upperBound:625} {target:73 lowerBound:73 upperBound:698} {target:50 lowerBound:50 upperBound:674} {target:49 lowerBound:49 upperBound:698} {target:24 lowerBound:24 upperBound:845} {target:NaN lowerBound:13 upperBound:613}]
	E0717 19:21:49.993822       1 apf_controller.go:411] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=[leader-election node-high system workload-high workload-low catch-all global-default] items=[{target:25 lowerBound:25 upperBound:625} {target:73 lowerBound:73 upperBound:698} {target:50 lowerBound:50 upperBound:674} {target:49 lowerBound:49 upperBound:698} {target:24 lowerBound:24 upperBound:845} {target:NaN lowerBound:13 upperBound:613} {target:24 lowerBound:24 upperBound:649}]
	E0717 19:21:59.994518       1 apf_controller.go:411] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=[workload-low catch-all global-default leader-election node-high system workload-high] items=[{target:24 lowerBound:24 upperBound:845} {target:NaN lowerBound:13 upperBound:613} {target:24 lowerBound:24 upperBound:649} {target:25 lowerBound:25 upperBound:625} {target:73 lowerBound:73 upperBound:698} {target:50 lowerBound:50 upperBound:674} {target:49 lowerBound:49 upperBound:698}]
	E0717 19:22:09.995722       1 apf_controller.go:411] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=[node-high system workload-high workload-low catch-all global-default leader-election] items=[{target:73 lowerBound:73 upperBound:698} {target:50 lowerBound:50 upperBound:674} {target:49 lowerBound:49 upperBound:698} {target:24 lowerBound:24 upperBound:845} {target:NaN lowerBound:13 upperBound:613} {target:24 lowerBound:24 upperBound:649} {target:25 lowerBound:25 upperBound:625}]
	E0717 19:22:19.996714       1 apf_controller.go:411] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=[leader-election node-high system workload-high workload-low catch-all global-default] items=[{target:25 lowerBound:25 upperBound:625} {target:73 lowerBound:73 upperBound:698} {target:50 lowerBound:50 upperBound:674} {target:49 lowerBound:49 upperBound:698} {target:24 lowerBound:24 upperBound:845} {target:NaN lowerBound:13 upperBound:613} {target:24 lowerBound:24 upperBound:649}]
	E0717 19:22:29.997463       1 apf_controller.go:411] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=[global-default leader-election node-high system workload-high workload-low catch-all] items=[{target:24 lowerBound:24 upperBound:649} {target:25 lowerBound:25 upperBound:625} {target:73 lowerBound:73 upperBound:698} {target:50 lowerBound:50 upperBound:674} {target:49 lowerBound:49 upperBound:698} {target:24 lowerBound:24 upperBound:845} {target:NaN lowerBound:13 upperBound:613}]
	E0717 19:22:39.998628       1 apf_controller.go:411] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=[system workload-high workload-low catch-all global-default leader-election node-high] items=[{target:50 lowerBound:50 upperBound:674} {target:49 lowerBound:49 upperBound:698} {target:24 lowerBound:24 upperBound:845} {target:NaN lowerBound:13 upperBound:613} {target:24 lowerBound:24 upperBound:649} {target:25 lowerBound:25 upperBound:625} {target:73 lowerBound:73 upperBound:698}]
	E0717 19:22:50.001253       1 apf_controller.go:411] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=[system workload-high workload-low catch-all global-default leader-election node-high] items=[{target:50 lowerBound:50 upperBound:674} {target:49 lowerBound:49 upperBound:698} {target:24 lowerBound:24 upperBound:845} {target:NaN lowerBound:13 upperBound:613} {target:24 lowerBound:24 upperBound:649} {target:25 lowerBound:25 upperBound:625} {target:73 lowerBound:73 upperBound:698}]
	E0717 19:23:00.001755       1 apf_controller.go:411] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=[global-default leader-election node-high system workload-high workload-low catch-all] items=[{target:24 lowerBound:24 upperBound:649} {target:25 lowerBound:25 upperBound:625} {target:73 lowerBound:73 upperBound:698} {target:50 lowerBound:50 upperBound:674} {target:49 lowerBound:49 upperBound:698} {target:24 lowerBound:24 upperBound:845} {target:NaN lowerBound:13 upperBound:613}]
	E0717 19:23:10.003181       1 apf_controller.go:411] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=[system workload-high workload-low catch-all global-default leader-election node-high] items=[{target:50 lowerBound:50 upperBound:674} {target:49 lowerBound:49 upperBound:698} {target:24 lowerBound:24 upperBound:845} {target:NaN lowerBound:13 upperBound:613} {target:24 lowerBound:24 upperBound:649} {target:25 lowerBound:25 upperBound:625} {target:73 lowerBound:73 upperBound:698}]
	E0717 19:23:20.003846       1 apf_controller.go:411] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=[global-default leader-election node-high system workload-high workload-low catch-all] items=[{target:24 lowerBound:24 upperBound:649} {target:25 lowerBound:25 upperBound:625} {target:73 lowerBound:73 upperBound:698} {target:50 lowerBound:50 upperBound:674} {target:49 lowerBound:49 upperBound:698} {target:24 lowerBound:24 upperBound:845} {target:NaN lowerBound:13 upperBound:613}]
	E0717 19:23:30.004926       1 apf_controller.go:411] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=[workload-low catch-all global-default leader-election node-high system workload-high] items=[{target:24 lowerBound:24 upperBound:845} {target:NaN lowerBound:13 upperBound:613} {target:24 lowerBound:24 upperBound:649} {target:25 lowerBound:25 upperBound:625} {target:73 lowerBound:73 upperBound:698} {target:50 lowerBound:50 upperBound:674} {target:49 lowerBound:49 upperBound:698}]
	E0717 19:23:40.006025       1 apf_controller.go:411] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=[workload-low catch-all global-default leader-election node-high system workload-high] items=[{target:24 lowerBound:24 upperBound:845} {target:NaN lowerBound:13 upperBound:613} {target:24 lowerBound:24 upperBound:649} {target:25 lowerBound:25 upperBound:625} {target:73 lowerBound:73 upperBound:698} {target:50 lowerBound:50 upperBound:674} {target:49 lowerBound:49 upperBound:698}]
	E0717 19:23:50.006879       1 apf_controller.go:411] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=[workload-low catch-all global-default leader-election node-high system workload-high] items=[{target:24 lowerBound:24 upperBound:845} {target:NaN lowerBound:13 upperBound:613} {target:24 lowerBound:24 upperBound:649} {target:25 lowerBound:25 upperBound:625} {target:73 lowerBound:73 upperBound:698} {target:50 lowerBound:50 upperBound:674} {target:49 lowerBound:49 upperBound:698}]
	
	* 
	* ==> kube-controller-manager [7bd3855d96bf48583c05bf555f0dc2582eda3385ab563cee3c7568005b793d21] <==
	* I0717 19:20:32.689862       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0717 19:20:32.702460       1 shared_informer.go:318] Caches are synced for PVC protection
	I0717 19:20:32.702560       1 shared_informer.go:318] Caches are synced for job
	I0717 19:20:32.712054       1 shared_informer.go:318] Caches are synced for resource quota
	I0717 19:20:32.712187       1 shared_informer.go:318] Caches are synced for resource quota
	I0717 19:20:32.712294       1 shared_informer.go:318] Caches are synced for ephemeral
	I0717 19:20:33.015986       1 shared_informer.go:318] Caches are synced for garbage collector
	I0717 19:20:33.016083       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0717 19:20:33.061827       1 shared_informer.go:318] Caches are synced for garbage collector
	W0717 19:21:01.171017       1 topologycache.go:232] Can't get CPU or zone information for multinode-464644-m03 node
	I0717 19:22:08.445174       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-ftsv6"
	W0717 19:22:11.451012       1 topologycache.go:232] Can't get CPU or zone information for multinode-464644-m03 node
	I0717 19:22:12.146714       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-464644-m02\" does not exist"
	W0717 19:22:12.146798       1 topologycache.go:232] Can't get CPU or zone information for multinode-464644-m03 node
	I0717 19:22:12.147545       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb-bjpl2" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-67b7f59bb-bjpl2"
	I0717 19:22:12.172747       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-464644-m02" podCIDRs=[10.244.1.0/24]
	W0717 19:22:12.224069       1 topologycache.go:232] Can't get CPU or zone information for multinode-464644-m02 node
	W0717 19:22:53.323940       1 topologycache.go:232] Can't get CPU or zone information for multinode-464644-m02 node
	I0717 19:23:49.726680       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-2697q"
	W0717 19:23:52.737231       1 topologycache.go:232] Can't get CPU or zone information for multinode-464644-m02 node
	I0717 19:23:53.465936       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-464644-m03\" does not exist"
	W0717 19:23:53.466153       1 topologycache.go:232] Can't get CPU or zone information for multinode-464644-m02 node
	I0717 19:23:53.467271       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb-ftsv6" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-67b7f59bb-ftsv6"
	I0717 19:23:53.484466       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-464644-m03" podCIDRs=[10.244.2.0/24]
	W0717 19:23:53.561219       1 topologycache.go:232] Can't get CPU or zone information for multinode-464644-m02 node
	
	* 
	* ==> kube-proxy [e6cf4cd85bc334268040ca65da0dbd4ccc640a71d29ce44982ab3b8387eba6c1] <==
	* I0717 19:20:21.685244       1 node.go:141] Successfully retrieved node IP: 192.168.39.174
	I0717 19:20:21.685451       1 server_others.go:110] "Detected node IP" address="192.168.39.174"
	I0717 19:20:21.685538       1 server_others.go:554] "Using iptables proxy"
	I0717 19:20:21.753588       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0717 19:20:21.753663       1 server_others.go:192] "Using iptables Proxier"
	I0717 19:20:21.753716       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 19:20:21.754086       1 server.go:658] "Version info" version="v1.27.3"
	I0717 19:20:21.754250       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 19:20:21.755104       1 config.go:188] "Starting service config controller"
	I0717 19:20:21.755158       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 19:20:21.755190       1 config.go:97] "Starting endpoint slice config controller"
	I0717 19:20:21.755206       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 19:20:21.756878       1 config.go:315] "Starting node config controller"
	I0717 19:20:21.756920       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 19:20:21.883410       1 shared_informer.go:318] Caches are synced for node config
	I0717 19:20:21.883468       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0717 19:20:21.890546       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [f7b38e320c90c2756a9ba7c268250d0aa287f84c13cf3e2575b69b2a2cd704f1] <==
	* I0717 19:20:17.734785       1 serving.go:348] Generated self-signed cert in-memory
	W0717 19:20:19.956994       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 19:20:19.957081       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 19:20:19.957110       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 19:20:19.957134       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 19:20:19.994037       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.3"
	I0717 19:20:19.994200       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 19:20:19.998069       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 19:20:19.998147       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 19:20:20.000101       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0717 19:20:20.000168       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 19:20:20.099676       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-07-17 19:19:46 UTC, ends at Mon 2023-07-17 19:23:58 UTC. --
	Jul 17 19:20:21 multinode-464644 kubelet[920]: E0717 19:20:21.993844     920 projected.go:198] Error preparing data for projected volume kube-api-access-n97jv for pod default/busybox-67b7f59bb-jgj4t: object "default"/"kube-root-ca.crt" not registered
	Jul 17 19:20:21 multinode-464644 kubelet[920]: E0717 19:20:21.993895     920 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fe524d58-c36b-41da-82eb-f0336652f7c2-kube-api-access-n97jv podName:fe524d58-c36b-41da-82eb-f0336652f7c2 nodeName:}" failed. No retries permitted until 2023-07-17 19:20:23.993882486 +0000 UTC m=+11.030762853 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-n97jv" (UniqueName: "kubernetes.io/projected/fe524d58-c36b-41da-82eb-f0336652f7c2-kube-api-access-n97jv") pod "busybox-67b7f59bb-jgj4t" (UID: "fe524d58-c36b-41da-82eb-f0336652f7c2") : object "default"/"kube-root-ca.crt" not registered
	Jul 17 19:20:22 multinode-464644 kubelet[920]: E0717 19:20:22.249038     920 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5d78c9869d-wqj4s" podUID=a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991
	Jul 17 19:20:22 multinode-464644 kubelet[920]: E0717 19:20:22.249194     920 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-67b7f59bb-jgj4t" podUID=fe524d58-c36b-41da-82eb-f0336652f7c2
	Jul 17 19:20:23 multinode-464644 kubelet[920]: E0717 19:20:23.908542     920 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 17 19:20:23 multinode-464644 kubelet[920]: E0717 19:20:23.908652     920 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991-config-volume podName:a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991 nodeName:}" failed. No retries permitted until 2023-07-17 19:20:27.908635677 +0000 UTC m=+14.945516045 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991-config-volume") pod "coredns-5d78c9869d-wqj4s" (UID: "a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991") : object "kube-system"/"coredns" not registered
	Jul 17 19:20:24 multinode-464644 kubelet[920]: E0717 19:20:24.009802     920 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Jul 17 19:20:24 multinode-464644 kubelet[920]: E0717 19:20:24.009876     920 projected.go:198] Error preparing data for projected volume kube-api-access-n97jv for pod default/busybox-67b7f59bb-jgj4t: object "default"/"kube-root-ca.crt" not registered
	Jul 17 19:20:24 multinode-464644 kubelet[920]: E0717 19:20:24.009934     920 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fe524d58-c36b-41da-82eb-f0336652f7c2-kube-api-access-n97jv podName:fe524d58-c36b-41da-82eb-f0336652f7c2 nodeName:}" failed. No retries permitted until 2023-07-17 19:20:28.009919933 +0000 UTC m=+15.046800301 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-n97jv" (UniqueName: "kubernetes.io/projected/fe524d58-c36b-41da-82eb-f0336652f7c2-kube-api-access-n97jv") pod "busybox-67b7f59bb-jgj4t" (UID: "fe524d58-c36b-41da-82eb-f0336652f7c2") : object "default"/"kube-root-ca.crt" not registered
	Jul 17 19:20:24 multinode-464644 kubelet[920]: E0717 19:20:24.248326     920 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5d78c9869d-wqj4s" podUID=a642a4c6-9ebf-4ff2-a6ef-2653a3e0d991
	Jul 17 19:20:24 multinode-464644 kubelet[920]: E0717 19:20:24.248504     920 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-67b7f59bb-jgj4t" podUID=fe524d58-c36b-41da-82eb-f0336652f7c2
	Jul 17 19:20:25 multinode-464644 kubelet[920]: I0717 19:20:25.544841     920 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jul 17 19:20:52 multinode-464644 kubelet[920]: I0717 19:20:52.461137     920 scope.go:115] "RemoveContainer" containerID="aa98acb609735e9c234208fe3e95dce147776f50db39a85536a2c6b58351ea35"
	Jul 17 19:21:13 multinode-464644 kubelet[920]: E0717 19:21:13.270093     920 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 17 19:21:13 multinode-464644 kubelet[920]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 19:21:13 multinode-464644 kubelet[920]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 19:21:13 multinode-464644 kubelet[920]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 19:22:13 multinode-464644 kubelet[920]: E0717 19:22:13.284027     920 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 17 19:22:13 multinode-464644 kubelet[920]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 19:22:13 multinode-464644 kubelet[920]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 19:22:13 multinode-464644 kubelet[920]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 19:23:13 multinode-464644 kubelet[920]: E0717 19:23:13.272305     920 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 17 19:23:13 multinode-464644 kubelet[920]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 19:23:13 multinode-464644 kubelet[920]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 19:23:13 multinode-464644 kubelet[920]:  > table=nat chain=KUBE-KUBELET-CANARY
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-464644 -n multinode-464644
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-464644 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (684.85s)
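The kubelet entries in the post-mortem above fail pod syncs with "No CNI configuration file in /etc/cni/net.d/", which is what keeps coredns and the busybox pods pending while the restarted node comes back. A minimal triage sketch against this profile, assuming a standard minikube install on the CI host (the report invokes it as out/minikube-linux-amd64) and that the cluster's CNI daemonset runs in kube-system, which the log itself does not confirm:

	# does any CNI config exist on the node?
	minikube ssh -p multinode-464644 -- ls -l /etc/cni/net.d/
	# is the network/CNI daemonset actually running?
	kubectl --context multinode-464644 -n kube-system get pods -o wide
	# pull the kubelet journal again and keep only CNI-related lines
	minikube ssh -p multinode-464644 -- sudo journalctl -u kubelet --no-pager | grep -i cni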

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (143.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 stop
E0717 19:26:00.134087 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt: no such file or directory
multinode_test.go:314: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-464644 stop: exit status 82 (2m1.338109876s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-464644"  ...
	* Stopping node "multinode-464644"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:316: node stop returned an error. args "out/minikube-linux-amd64 -p multinode-464644 stop": exit status 82
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 status
E0717 19:26:03.520271 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt: no such file or directory
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-464644 status: exit status 3 (18.711587994s)

                                                
                                                
-- stdout --
	multinode-464644
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-464644-m02
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 19:26:21.894006 1087419 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.174:22: connect: no route to host
	E0717 19:26:21.894062 1087419 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.174:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-464644 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-464644 -n multinode-464644
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-464644 -n multinode-464644: exit status 3 (3.181200549s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 19:26:25.254045 1087516 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.174:22: connect: no route to host
	E0717 19:26:25.254067 1087516 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.174:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-464644" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (143.23s)
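Exit status 82 here corresponds to the GUEST_STOP_TIMEOUT shown in stderr: minikube gave up waiting for the KVM guest to power off, and the follow-up status calls then failed with "no route to host" over SSH. A hedged diagnosis sketch for the CI host, assuming libvirt's virsh is installed and that the kvm2 driver named the domain after the profile (the usual convention, but an assumption here):

	# retry the stop and collect minikube's own log bundle either way
	minikube -p multinode-464644 stop || true
	minikube -p multinode-464644 logs --file=logs.txt
	# ask libvirt what state the guest is really in
	sudo virsh list --all
	sudo virsh domstate multinode-464644
	# last resort so the next run starts clean: force the domain off
	sudo virsh destroy multinode-464644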

                                                
                                    
x
+
TestPreload (277.87s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-585582 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0717 19:36:00.134237 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt: no such file or directory
E0717 19:36:03.521605 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt: no such file or directory
E0717 19:36:04.378663 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/functional-685960/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-585582 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m16.045852972s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-585582 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-585582 image pull gcr.io/k8s-minikube/busybox: (1.159322061s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-585582
E0717 19:38:01.330433 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/functional-685960/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-585582: exit status 82 (2m0.931203024s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-585582"  ...
	* Stopping node "test-preload-585582"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-585582 failed: exit status 82
panic.go:522: *** TestPreload FAILED at 2023-07-17 19:39:05.122399242 +0000 UTC m=+3346.431094786
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-585582 -n test-preload-585582
E0717 19:39:06.569439 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-585582 -n test-preload-585582: exit status 3 (18.599399194s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 19:39:23.717960 1090465 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.8:22: connect: no route to host
	E0717 19:39:23.717984 1090465 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.8:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-585582" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-585582" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-585582
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-585582: (1.134829835s)
--- FAIL: TestPreload (277.87s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (222.79s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.6.2.3820539245.exe start -p running-upgrade-585114 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:132: (dbg) Done: /tmp/minikube-v1.6.2.3820539245.exe start -p running-upgrade-585114 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m20.220559455s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-585114 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:142: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-585114 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (1m20.436225258s)

                                                
                                                
-- stdout --
	* [running-upgrade-585114] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16890
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16890-1061725/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1061725/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	* Using the kvm2 driver based on existing profile
	* Downloading driver docker-machine-driver-kvm2:
	* Starting control plane node running-upgrade-585114 in cluster running-upgrade-585114
	* Updating the running kvm2 "running-upgrade-585114" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 19:45:31.480340 1096384 out.go:296] Setting OutFile to fd 1 ...
	I0717 19:45:31.481027 1096384 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:45:31.481045 1096384 out.go:309] Setting ErrFile to fd 2...
	I0717 19:45:31.481053 1096384 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:45:31.481541 1096384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1061725/.minikube/bin
	I0717 19:45:31.483291 1096384 out.go:303] Setting JSON to false
	I0717 19:45:31.484505 1096384 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":16083,"bootTime":1689607049,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 19:45:31.484596 1096384 start.go:138] virtualization: kvm guest
	I0717 19:45:31.487728 1096384 out.go:177] * [running-upgrade-585114] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 19:45:31.490297 1096384 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 19:45:31.490318 1096384 notify.go:220] Checking for updates...
	I0717 19:45:31.492308 1096384 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:45:31.494456 1096384 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 19:45:31.496546 1096384 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1061725/.minikube
	I0717 19:45:31.498596 1096384 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 19:45:31.500611 1096384 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 19:45:31.503136 1096384 config.go:182] Loaded profile config "running-upgrade-585114": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0717 19:45:31.503162 1096384 start_flags.go:683] config upgrade: Driver=kvm2
	I0717 19:45:31.503174 1096384 start_flags.go:695] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 19:45:31.503248 1096384 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/running-upgrade-585114/config.json ...
	I0717 19:45:31.503845 1096384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version/docker-machine-driver-kvm2
	I0717 19:45:31.503923 1096384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:45:31.535162 1096384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33555
	I0717 19:45:31.535799 1096384 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:45:31.536738 1096384 main.go:141] libmachine: Using API Version  1
	I0717 19:45:31.536772 1096384 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:45:31.537378 1096384 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:45:31.537660 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .DriverName
	I0717 19:45:31.540830 1096384 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0717 19:45:31.543213 1096384 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 19:45:31.543752 1096384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version/docker-machine-driver-kvm2
	I0717 19:45:31.543837 1096384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:45:31.574517 1096384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41909
	I0717 19:45:31.574954 1096384 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:45:31.575539 1096384 main.go:141] libmachine: Using API Version  1
	I0717 19:45:31.575570 1096384 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:45:31.575977 1096384 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:45:31.576206 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .DriverName
	I0717 19:45:31.615001 1096384 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 19:45:31.617157 1096384 start.go:298] selected driver: kvm2
	I0717 19:45:31.617181 1096384 start.go:880] validating driver "kvm2" against &{Name:running-upgrade-585114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.48 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:45:31.617300 1096384 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 19:45:31.618088 1096384 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:45:32.308070 1096384 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16890-1061725/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	W0717 19:45:42.341487 1096384 install.go:62] docker-machine-driver-kvm2: exit status 1
	I0717 19:45:42.480009 1096384 out.go:177] * Downloading driver docker-machine-driver-kvm2:
	I0717 19:45:42.638017 1096384 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.30.1/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.30.1/docker-machine-driver-kvm2-amd64.sha256 -> /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:45:43.335802 1096384 cni.go:84] Creating CNI manager for ""
	I0717 19:45:43.335839 1096384 cni.go:130] EnableDefaultCNI is true, recommending bridge
	I0717 19:45:43.335849 1096384 start_flags.go:319] config:
	{Name:running-upgrade-585114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.48 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:45:43.336082 1096384 iso.go:125] acquiring lock: {Name:mk4c08fdde891ec9d41b144ca36ec57b2da32175 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:45:43.339198 1096384 out.go:177] * Starting control plane node running-upgrade-585114 in cluster running-upgrade-585114
	I0717 19:45:43.341330 1096384 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W0717 19:45:43.368166 1096384 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0717 19:45:43.368389 1096384 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/running-upgrade-585114/config.json ...
	I0717 19:45:43.368518 1096384 cache.go:107] acquiring lock: {Name:mk9729e1d18cb52a56136e909fd3ce8f3567c08a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:45:43.368609 1096384 cache.go:107] acquiring lock: {Name:mk245d32407f97ac302c30fd3bb4e71f8404ae2a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:45:43.368673 1096384 cache.go:115] /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0717 19:45:43.368697 1096384 cache.go:115] /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I0717 19:45:43.368714 1096384 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 106.945µs
	I0717 19:45:43.368729 1096384 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I0717 19:45:43.368694 1096384 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 188.574µs
	I0717 19:45:43.368728 1096384 start.go:365] acquiring machines lock for running-upgrade-585114: {Name:mk1fbc5d19c9a63796b02883911e39f1c406ff89 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:45:43.368754 1096384 cache.go:107] acquiring lock: {Name:mked8643444471b2a59a57aa76caf81e3ba71ad5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:45:43.368779 1096384 cache.go:107] acquiring lock: {Name:mk6cc7986bf0f7d8a8d0f4f85aaa58d7e95e6170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:45:43.368795 1096384 cache.go:115] /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I0717 19:45:43.368802 1096384 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 50.541µs
	I0717 19:45:43.368741 1096384 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0717 19:45:43.368749 1096384 cache.go:107] acquiring lock: {Name:mk030735a414bd2f3a5d0be81765f0a353901b1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:45:43.368824 1096384 cache.go:115] /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I0717 19:45:43.368832 1096384 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 56.204µs
	I0717 19:45:43.368843 1096384 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I0717 19:45:43.368722 1096384 cache.go:107] acquiring lock: {Name:mk42c5e717e91d01a12ff47fbdf4185e0db19e0e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:45:43.368881 1096384 cache.go:115] /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I0717 19:45:43.368919 1096384 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 201.491µs
	I0717 19:45:43.368942 1096384 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I0717 19:45:43.368881 1096384 cache.go:115] /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0717 19:45:43.368956 1096384 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 207.576µs
	I0717 19:45:43.368964 1096384 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0717 19:45:43.368813 1096384 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I0717 19:45:43.368583 1096384 cache.go:107] acquiring lock: {Name:mk449e1e3e84e7247ff963a252dfa10609003db8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:45:43.368999 1096384 cache.go:115] /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I0717 19:45:43.369041 1096384 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 453.111µs
	I0717 19:45:43.369053 1096384 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I0717 19:45:43.368527 1096384 cache.go:107] acquiring lock: {Name:mkf584e34f70b15a7e4b8c22b37c2c5192ceba1d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:45:43.369082 1096384 cache.go:115] /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I0717 19:45:43.369091 1096384 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 580.758µs
	I0717 19:45:43.369100 1096384 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I0717 19:45:43.369116 1096384 cache.go:87] Successfully saved all images to host disk.
	I0717 19:46:48.190952 1096384 start.go:369] acquired machines lock for "running-upgrade-585114" in 1m4.822177498s
	I0717 19:46:48.191025 1096384 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:46:48.191035 1096384 fix.go:54] fixHost starting: minikube
	I0717 19:46:48.191515 1096384 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:46:48.191584 1096384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:46:48.209752 1096384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40355
	I0717 19:46:48.210306 1096384 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:46:48.210958 1096384 main.go:141] libmachine: Using API Version  1
	I0717 19:46:48.210987 1096384 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:46:48.211397 1096384 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:46:48.211631 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .DriverName
	I0717 19:46:48.211887 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetState
	I0717 19:46:48.214245 1096384 fix.go:102] recreateIfNeeded on running-upgrade-585114: state=Running err=<nil>
	W0717 19:46:48.214277 1096384 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 19:46:48.217357 1096384 out.go:177] * Updating the running kvm2 "running-upgrade-585114" VM ...
	I0717 19:46:48.219313 1096384 machine.go:88] provisioning docker machine ...
	I0717 19:46:48.219353 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .DriverName
	I0717 19:46:48.219687 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetMachineName
	I0717 19:46:48.219895 1096384 buildroot.go:166] provisioning hostname "running-upgrade-585114"
	I0717 19:46:48.219931 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetMachineName
	I0717 19:46:48.220149 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetSSHHostname
	I0717 19:46:48.223549 1096384 main.go:141] libmachine: (running-upgrade-585114) DBG | domain running-upgrade-585114 has defined MAC address 52:54:00:a9:fc:a4 in network minikube-net
	I0717 19:46:48.224106 1096384 main.go:141] libmachine: (running-upgrade-585114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:fc:a4", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 20:43:48 +0000 UTC Type:0 Mac:52:54:00:a9:fc:a4 Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:running-upgrade-585114 Clientid:01:52:54:00:a9:fc:a4}
	I0717 19:46:48.224146 1096384 main.go:141] libmachine: (running-upgrade-585114) DBG | domain running-upgrade-585114 has defined IP address 192.168.50.48 and MAC address 52:54:00:a9:fc:a4 in network minikube-net
	I0717 19:46:48.224332 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetSSHPort
	I0717 19:46:48.224583 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetSSHKeyPath
	I0717 19:46:48.224798 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetSSHKeyPath
	I0717 19:46:48.225020 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetSSHUsername
	I0717 19:46:48.225256 1096384 main.go:141] libmachine: Using SSH client type: native
	I0717 19:46:48.225854 1096384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.48 22 <nil> <nil>}
	I0717 19:46:48.225882 1096384 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-585114 && echo "running-upgrade-585114" | sudo tee /etc/hostname
	I0717 19:46:48.370665 1096384 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-585114
	
	I0717 19:46:48.370702 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetSSHHostname
	I0717 19:46:48.374208 1096384 main.go:141] libmachine: (running-upgrade-585114) DBG | domain running-upgrade-585114 has defined MAC address 52:54:00:a9:fc:a4 in network minikube-net
	I0717 19:46:48.374595 1096384 main.go:141] libmachine: (running-upgrade-585114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:fc:a4", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 20:43:48 +0000 UTC Type:0 Mac:52:54:00:a9:fc:a4 Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:running-upgrade-585114 Clientid:01:52:54:00:a9:fc:a4}
	I0717 19:46:48.374631 1096384 main.go:141] libmachine: (running-upgrade-585114) DBG | domain running-upgrade-585114 has defined IP address 192.168.50.48 and MAC address 52:54:00:a9:fc:a4 in network minikube-net
	I0717 19:46:48.374847 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetSSHPort
	I0717 19:46:48.375068 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetSSHKeyPath
	I0717 19:46:48.375243 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetSSHKeyPath
	I0717 19:46:48.375356 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetSSHUsername
	I0717 19:46:48.375525 1096384 main.go:141] libmachine: Using SSH client type: native
	I0717 19:46:48.376148 1096384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.48 22 <nil> <nil>}
	I0717 19:46:48.376170 1096384 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-585114' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-585114/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-585114' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:46:48.515045 1096384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:46:48.515086 1096384 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16890-1061725/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-1061725/.minikube}
	I0717 19:46:48.515108 1096384 buildroot.go:174] setting up certificates
	I0717 19:46:48.515118 1096384 provision.go:83] configureAuth start
	I0717 19:46:48.515131 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetMachineName
	I0717 19:46:48.515439 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetIP
	I0717 19:46:48.518655 1096384 main.go:141] libmachine: (running-upgrade-585114) DBG | domain running-upgrade-585114 has defined MAC address 52:54:00:a9:fc:a4 in network minikube-net
	I0717 19:46:48.519190 1096384 main.go:141] libmachine: (running-upgrade-585114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:fc:a4", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 20:43:48 +0000 UTC Type:0 Mac:52:54:00:a9:fc:a4 Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:running-upgrade-585114 Clientid:01:52:54:00:a9:fc:a4}
	I0717 19:46:48.519219 1096384 main.go:141] libmachine: (running-upgrade-585114) DBG | domain running-upgrade-585114 has defined IP address 192.168.50.48 and MAC address 52:54:00:a9:fc:a4 in network minikube-net
	I0717 19:46:48.519426 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetSSHHostname
	I0717 19:46:48.522151 1096384 main.go:141] libmachine: (running-upgrade-585114) DBG | domain running-upgrade-585114 has defined MAC address 52:54:00:a9:fc:a4 in network minikube-net
	I0717 19:46:48.522595 1096384 main.go:141] libmachine: (running-upgrade-585114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:fc:a4", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 20:43:48 +0000 UTC Type:0 Mac:52:54:00:a9:fc:a4 Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:running-upgrade-585114 Clientid:01:52:54:00:a9:fc:a4}
	I0717 19:46:48.522625 1096384 main.go:141] libmachine: (running-upgrade-585114) DBG | domain running-upgrade-585114 has defined IP address 192.168.50.48 and MAC address 52:54:00:a9:fc:a4 in network minikube-net
	I0717 19:46:48.522769 1096384 provision.go:138] copyHostCerts
	I0717 19:46:48.522834 1096384 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem, removing ...
	I0717 19:46:48.522847 1096384 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem
	I0717 19:46:48.522915 1096384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem (1082 bytes)
	I0717 19:46:48.523061 1096384 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem, removing ...
	I0717 19:46:48.523073 1096384 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem
	I0717 19:46:48.523101 1096384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem (1123 bytes)
	I0717 19:46:48.523207 1096384 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem, removing ...
	I0717 19:46:48.523221 1096384 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem
	I0717 19:46:48.523251 1096384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem (1675 bytes)
	I0717 19:46:48.523399 1096384 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-585114 san=[192.168.50.48 192.168.50.48 localhost 127.0.0.1 minikube running-upgrade-585114]
	I0717 19:46:48.701897 1096384 provision.go:172] copyRemoteCerts
	I0717 19:46:48.702047 1096384 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:46:48.702096 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetSSHHostname
	I0717 19:46:48.705107 1096384 main.go:141] libmachine: (running-upgrade-585114) DBG | domain running-upgrade-585114 has defined MAC address 52:54:00:a9:fc:a4 in network minikube-net
	I0717 19:46:48.705443 1096384 main.go:141] libmachine: (running-upgrade-585114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:fc:a4", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 20:43:48 +0000 UTC Type:0 Mac:52:54:00:a9:fc:a4 Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:running-upgrade-585114 Clientid:01:52:54:00:a9:fc:a4}
	I0717 19:46:48.705482 1096384 main.go:141] libmachine: (running-upgrade-585114) DBG | domain running-upgrade-585114 has defined IP address 192.168.50.48 and MAC address 52:54:00:a9:fc:a4 in network minikube-net
	I0717 19:46:48.705727 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetSSHPort
	I0717 19:46:48.705976 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetSSHKeyPath
	I0717 19:46:48.706173 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetSSHUsername
	I0717 19:46:48.706336 1096384 sshutil.go:53] new ssh client: &{IP:192.168.50.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/running-upgrade-585114/id_rsa Username:docker}
	I0717 19:46:48.801959 1096384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 19:46:48.818919 1096384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 19:46:48.843964 1096384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 19:46:48.861860 1096384 provision.go:86] duration metric: configureAuth took 346.727517ms
	I0717 19:46:48.861897 1096384 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:46:48.862112 1096384 config.go:182] Loaded profile config "running-upgrade-585114": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0717 19:46:48.862216 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetSSHHostname
	I0717 19:46:48.865269 1096384 main.go:141] libmachine: (running-upgrade-585114) DBG | domain running-upgrade-585114 has defined MAC address 52:54:00:a9:fc:a4 in network minikube-net
	I0717 19:46:48.865632 1096384 main.go:141] libmachine: (running-upgrade-585114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:fc:a4", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 20:43:48 +0000 UTC Type:0 Mac:52:54:00:a9:fc:a4 Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:running-upgrade-585114 Clientid:01:52:54:00:a9:fc:a4}
	I0717 19:46:48.865684 1096384 main.go:141] libmachine: (running-upgrade-585114) DBG | domain running-upgrade-585114 has defined IP address 192.168.50.48 and MAC address 52:54:00:a9:fc:a4 in network minikube-net
	I0717 19:46:48.865830 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetSSHPort
	I0717 19:46:48.866057 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetSSHKeyPath
	I0717 19:46:48.866208 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetSSHKeyPath
	I0717 19:46:48.866368 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetSSHUsername
	I0717 19:46:48.866540 1096384 main.go:141] libmachine: Using SSH client type: native
	I0717 19:46:48.867136 1096384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.48 22 <nil> <nil>}
	I0717 19:46:48.867163 1096384 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:46:49.564016 1096384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:46:49.564050 1096384 machine.go:91] provisioned docker machine in 1.344713852s
	I0717 19:46:49.564062 1096384 start.go:300] post-start starting for "running-upgrade-585114" (driver="kvm2")
	I0717 19:46:49.564074 1096384 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:46:49.564112 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .DriverName
	I0717 19:46:49.564540 1096384 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:46:49.564585 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetSSHHostname
	I0717 19:46:49.568015 1096384 main.go:141] libmachine: (running-upgrade-585114) DBG | domain running-upgrade-585114 has defined MAC address 52:54:00:a9:fc:a4 in network minikube-net
	I0717 19:46:49.568469 1096384 main.go:141] libmachine: (running-upgrade-585114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:fc:a4", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 20:43:48 +0000 UTC Type:0 Mac:52:54:00:a9:fc:a4 Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:running-upgrade-585114 Clientid:01:52:54:00:a9:fc:a4}
	I0717 19:46:49.568520 1096384 main.go:141] libmachine: (running-upgrade-585114) DBG | domain running-upgrade-585114 has defined IP address 192.168.50.48 and MAC address 52:54:00:a9:fc:a4 in network minikube-net
	I0717 19:46:49.568697 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetSSHPort
	I0717 19:46:49.568911 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetSSHKeyPath
	I0717 19:46:49.569085 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetSSHUsername
	I0717 19:46:49.569283 1096384 sshutil.go:53] new ssh client: &{IP:192.168.50.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/running-upgrade-585114/id_rsa Username:docker}
	I0717 19:46:49.663525 1096384 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:46:49.669319 1096384 info.go:137] Remote host: Buildroot 2019.02.7
	I0717 19:46:49.669359 1096384 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/addons for local assets ...
	I0717 19:46:49.669443 1096384 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/files for local assets ...
	I0717 19:46:49.669540 1096384 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> 10689542.pem in /etc/ssl/certs
	I0717 19:46:49.669683 1096384 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:46:49.677253 1096384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:46:49.696177 1096384 start.go:303] post-start completed in 132.097537ms
	I0717 19:46:49.696211 1096384 fix.go:56] fixHost completed within 1.505177258s
	I0717 19:46:49.696235 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetSSHHostname
	I0717 19:46:49.699389 1096384 main.go:141] libmachine: (running-upgrade-585114) DBG | domain running-upgrade-585114 has defined MAC address 52:54:00:a9:fc:a4 in network minikube-net
	I0717 19:46:49.699849 1096384 main.go:141] libmachine: (running-upgrade-585114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:fc:a4", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 20:43:48 +0000 UTC Type:0 Mac:52:54:00:a9:fc:a4 Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:running-upgrade-585114 Clientid:01:52:54:00:a9:fc:a4}
	I0717 19:46:49.699894 1096384 main.go:141] libmachine: (running-upgrade-585114) DBG | domain running-upgrade-585114 has defined IP address 192.168.50.48 and MAC address 52:54:00:a9:fc:a4 in network minikube-net
	I0717 19:46:49.700144 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetSSHPort
	I0717 19:46:49.700379 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetSSHKeyPath
	I0717 19:46:49.700584 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetSSHKeyPath
	I0717 19:46:49.700813 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetSSHUsername
	I0717 19:46:49.701017 1096384 main.go:141] libmachine: Using SSH client type: native
	I0717 19:46:49.701483 1096384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.48 22 <nil> <nil>}
	I0717 19:46:49.701499 1096384 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 19:46:49.841949 1096384 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689623209.836858116
	
	I0717 19:46:49.842006 1096384 fix.go:206] guest clock: 1689623209.836858116
	I0717 19:46:49.842018 1096384 fix.go:219] Guest: 2023-07-17 19:46:49.836858116 +0000 UTC Remote: 2023-07-17 19:46:49.696215146 +0000 UTC m=+78.263065487 (delta=140.64297ms)
	I0717 19:46:49.842064 1096384 fix.go:190] guest clock delta is within tolerance: 140.64297ms
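The delta above is plain arithmetic on the two timestamps: the guest's `date +%s.%N` reports 19:46:49.836858116 while the host records 19:46:49.696215146, a difference of 140642970ns = 140.64297ms, which is inside the allowed drift, so provisioning continues. A small sketch of that comparison (hypothetical helper, not minikube's fix.go; the 2s threshold is an assumed value, the inputs are copied from the log):

// Hypothetical sketch of the guest-clock tolerance check logged above.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts `date +%s.%N` output into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := (parts[1] + "000000000")[:9] // pad/trim the fraction to nanoseconds
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec).UTC(), nil
}

func main() {
	// Inputs copied from the log: guest `date` output and the host-side timestamp.
	guest, err := parseGuestClock("1689623209.836858116")
	if err != nil {
		panic(err)
	}
	host := time.Date(2023, 7, 17, 19, 46, 49, 696215146, time.UTC)
	delta := guest.Sub(host)
	const tolerance = 2 * time.Second // assumed threshold, not minikube's exact value
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta < tolerance && delta > -tolerance)
}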
	I0717 19:46:49.842073 1096384 start.go:83] releasing machines lock for "running-upgrade-585114", held for 1.651080155s
	I0717 19:46:49.842112 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .DriverName
	I0717 19:46:49.843276 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetIP
	I0717 19:46:49.847233 1096384 main.go:141] libmachine: (running-upgrade-585114) DBG | domain running-upgrade-585114 has defined MAC address 52:54:00:a9:fc:a4 in network minikube-net
	I0717 19:46:49.847778 1096384 main.go:141] libmachine: (running-upgrade-585114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:fc:a4", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 20:43:48 +0000 UTC Type:0 Mac:52:54:00:a9:fc:a4 Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:running-upgrade-585114 Clientid:01:52:54:00:a9:fc:a4}
	I0717 19:46:49.847817 1096384 main.go:141] libmachine: (running-upgrade-585114) DBG | domain running-upgrade-585114 has defined IP address 192.168.50.48 and MAC address 52:54:00:a9:fc:a4 in network minikube-net
	I0717 19:46:49.848040 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .DriverName
	I0717 19:46:49.848754 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .DriverName
	I0717 19:46:49.849000 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .DriverName
	I0717 19:46:49.849148 1096384 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:46:49.849224 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetSSHHostname
	I0717 19:46:49.849251 1096384 ssh_runner.go:195] Run: cat /version.json
	I0717 19:46:49.849278 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetSSHHostname
	I0717 19:46:49.852745 1096384 main.go:141] libmachine: (running-upgrade-585114) DBG | domain running-upgrade-585114 has defined MAC address 52:54:00:a9:fc:a4 in network minikube-net
	I0717 19:46:49.852864 1096384 main.go:141] libmachine: (running-upgrade-585114) DBG | domain running-upgrade-585114 has defined MAC address 52:54:00:a9:fc:a4 in network minikube-net
	I0717 19:46:49.853223 1096384 main.go:141] libmachine: (running-upgrade-585114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:fc:a4", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 20:43:48 +0000 UTC Type:0 Mac:52:54:00:a9:fc:a4 Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:running-upgrade-585114 Clientid:01:52:54:00:a9:fc:a4}
	I0717 19:46:49.853295 1096384 main.go:141] libmachine: (running-upgrade-585114) DBG | domain running-upgrade-585114 has defined IP address 192.168.50.48 and MAC address 52:54:00:a9:fc:a4 in network minikube-net
	I0717 19:46:49.853453 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetSSHPort
	I0717 19:46:49.853506 1096384 main.go:141] libmachine: (running-upgrade-585114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:fc:a4", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 20:43:48 +0000 UTC Type:0 Mac:52:54:00:a9:fc:a4 Iaid: IPaddr:192.168.50.48 Prefix:24 Hostname:running-upgrade-585114 Clientid:01:52:54:00:a9:fc:a4}
	I0717 19:46:49.853543 1096384 main.go:141] libmachine: (running-upgrade-585114) DBG | domain running-upgrade-585114 has defined IP address 192.168.50.48 and MAC address 52:54:00:a9:fc:a4 in network minikube-net
	I0717 19:46:49.853699 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetSSHKeyPath
	I0717 19:46:49.853730 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetSSHPort
	I0717 19:46:49.853925 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetSSHUsername
	I0717 19:46:49.853945 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetSSHKeyPath
	I0717 19:46:49.854138 1096384 main.go:141] libmachine: (running-upgrade-585114) Calling .GetSSHUsername
	I0717 19:46:49.854126 1096384 sshutil.go:53] new ssh client: &{IP:192.168.50.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/running-upgrade-585114/id_rsa Username:docker}
	I0717 19:46:49.854312 1096384 sshutil.go:53] new ssh client: &{IP:192.168.50.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/running-upgrade-585114/id_rsa Username:docker}
	W0717 19:46:49.951486 1096384 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0717 19:46:49.951597 1096384 ssh_runner.go:195] Run: systemctl --version
	I0717 19:46:49.985062 1096384 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:46:50.147109 1096384 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:46:50.154407 1096384 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:46:50.154499 1096384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:46:50.161885 1096384 cni.go:265] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 19:46:50.161918 1096384 start.go:469] detecting cgroup driver to use...
	I0717 19:46:50.162001 1096384 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:46:50.177545 1096384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:46:50.190329 1096384 docker.go:196] disabling cri-docker service (if available) ...
	I0717 19:46:50.190418 1096384 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:46:50.214378 1096384 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:46:50.228393 1096384 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0717 19:46:50.242387 1096384 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0717 19:46:50.242484 1096384 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:46:50.408085 1096384 docker.go:212] disabling docker service ...
	I0717 19:46:50.408163 1096384 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:46:51.438960 1096384 ssh_runner.go:235] Completed: sudo systemctl stop -f docker.socket: (1.030753532s)
	I0717 19:46:51.439079 1096384 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:46:51.460383 1096384 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:46:51.638807 1096384 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:46:51.816903 1096384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:46:51.829001 1096384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:46:51.843277 1096384 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0717 19:46:51.843355 1096384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:46:51.853309 1096384 out.go:177] 
	W0717 19:46:51.855400 1096384 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0717 19:46:51.855432 1096384 out.go:239] * 
	* 
	W0717 19:46:51.856330 1096384 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 19:46:51.859934 1096384 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:144: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-585114 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-07-17 19:46:51.881178957 +0000 UTC m=+3813.189874502
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-585114 -n running-upgrade-585114
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-585114 -n running-upgrade-585114: exit status 4 (277.428374ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 19:46:52.117979 1097173 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-585114" does not appear in /home/jenkins/minikube-integration/16890-1061725/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-585114" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-585114" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-585114
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-585114: (1.484806317s)
--- FAIL: TestRunningBinaryUpgrade (222.79s)
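The start aborts while re-provisioning the container runtime: the HEAD binary rewrites pause_image with sed in /etc/crio/crio.conf.d/02-crio.conf, but the guest created by minikube v1.6.2 (Buildroot 2019.02.7, per the log) has no such drop-in, so sed exits 1 and the run ends with RUNTIME_ENABLE and exit status 90. A hypothetical defensive variant is sketched below; it is not minikube's actual crio.go, and the fallback to /etc/crio/crio.conf is an assumption about the old guest image:

// Hypothetical sketch (not minikube's code): point the sed at whichever CRI-O
// config actually exists on the guest, instead of assuming the conf.d drop-in.
package main

import (
	"fmt"
	"os/exec"
)

// runOnGuest stands in for minikube's ssh_runner; here it just runs locally.
func runOnGuest(cmd string) error {
	return exec.Command("sh", "-c", cmd).Run()
}

func setPauseImage(image string) error {
	candidates := []string{
		"/etc/crio/crio.conf.d/02-crio.conf", // present on current guest images
		"/etc/crio/crio.conf",                // assumed location on the old v1.6.0 ISO
	}
	for _, conf := range candidates {
		if runOnGuest("test -f "+conf) != nil {
			continue // the failing run above never makes this existence check
		}
		return runOnGuest(fmt.Sprintf(
			`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, image, conf))
	}
	return fmt.Errorf("no CRI-O config found to update pause_image")
}

func main() {
	if err := setPauseImage("registry.k8s.io/pause:3.1"); err != nil {
		fmt.Println("update pause_image:", err)
	}
}

Checking for the file before editing it would turn this hard failure into either a successful edit of the legacy config or a clearer error on old guest images.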

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (296.73s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.6.2.3061442780.exe start -p stopped-upgrade-983290 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.6.2.3061442780.exe start -p stopped-upgrade-983290 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m17.342282804s)
version_upgrade_test.go:204: (dbg) Run:  /tmp/minikube-v1.6.2.3061442780.exe -p stopped-upgrade-983290 stop
version_upgrade_test.go:204: (dbg) Done: /tmp/minikube-v1.6.2.3061442780.exe -p stopped-upgrade-983290 stop: (1m32.916453457s)
version_upgrade_test.go:210: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-983290 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:210: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-983290 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (1m6.461984485s)
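Before the captured output that follows, the scenario being exercised is: bring the cluster up with the old binary, stop it, then start the same profile with the new binary; only that last step fails, with exit status 90. A rough sketch of the flow (not the literal version_upgrade_test.go source; binaries, profile name and flags are copied from the log, error handling is simplified):

// Rough sketch of the stopped-upgrade flow exercised above.
package main

import (
	"log"
	"os"
	"os/exec"
)

// run shells out to a minikube binary, mirroring what the test harness does.
func run(bin string, args ...string) error {
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	profile := "stopped-upgrade-983290"
	oldBin := "/tmp/minikube-v1.6.2.3061442780.exe" // legacy binary under test
	newBin := "out/minikube-linux-amd64"            // HEAD binary under test

	// 1) bring the cluster up with the old binary, 2) stop it,
	// 3) start again with the new binary -- the step that exits with status 90 above.
	steps := [][]string{
		{oldBin, "start", "-p", profile, "--memory=2200", "--vm-driver=kvm2", "--container-runtime=crio"},
		{oldBin, "-p", profile, "stop"},
		{newBin, "start", "-p", profile, "--memory=2200", "--alsologtostderr", "-v=1", "--driver=kvm2", "--container-runtime=crio"},
	}
	for _, step := range steps {
		if err := run(step[0], step[1:]...); err != nil {
			log.Fatalf("%v: %v", step, err)
		}
	}
}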

                                                
                                                
-- stdout --
	* [stopped-upgrade-983290] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16890
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16890-1061725/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1061725/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	* Using the kvm2 driver based on existing profile
	* Starting control plane node stopped-upgrade-983290 in cluster stopped-upgrade-983290
	* Restarting existing kvm2 VM for "stopped-upgrade-983290" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 19:45:16.390553 1094169 out.go:296] Setting OutFile to fd 1 ...
	I0717 19:45:16.390790 1094169 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:45:16.390808 1094169 out.go:309] Setting ErrFile to fd 2...
	I0717 19:45:16.390816 1094169 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:45:16.391129 1094169 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1061725/.minikube/bin
	I0717 19:45:16.391971 1094169 out.go:303] Setting JSON to false
	I0717 19:45:16.393457 1094169 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":16067,"bootTime":1689607049,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 19:45:16.393567 1094169 start.go:138] virtualization: kvm guest
	I0717 19:45:16.396818 1094169 out.go:177] * [stopped-upgrade-983290] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 19:45:16.398935 1094169 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 19:45:16.398931 1094169 notify.go:220] Checking for updates...
	I0717 19:45:16.400863 1094169 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:45:16.402677 1094169 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 19:45:16.405292 1094169 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1061725/.minikube
	I0717 19:45:16.407077 1094169 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 19:45:16.408851 1094169 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 19:45:16.417605 1094169 config.go:182] Loaded profile config "stopped-upgrade-983290": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0717 19:45:16.417656 1094169 start_flags.go:683] config upgrade: Driver=kvm2
	I0717 19:45:16.417682 1094169 start_flags.go:695] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 19:45:16.419169 1094169 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/stopped-upgrade-983290/config.json ...
	I0717 19:45:16.420317 1094169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:45:16.420495 1094169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:45:16.438981 1094169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37513
	I0717 19:45:16.439481 1094169 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:45:16.440237 1094169 main.go:141] libmachine: Using API Version  1
	I0717 19:45:16.440277 1094169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:45:16.440645 1094169 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:45:16.440860 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .DriverName
	I0717 19:45:16.443657 1094169 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0717 19:45:16.445600 1094169 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 19:45:16.445997 1094169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:45:16.446057 1094169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:45:16.466954 1094169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41557
	I0717 19:45:16.467407 1094169 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:45:16.468062 1094169 main.go:141] libmachine: Using API Version  1
	I0717 19:45:16.468088 1094169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:45:16.468483 1094169 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:45:16.468654 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .DriverName
	I0717 19:45:16.515059 1094169 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 19:45:16.517075 1094169 start.go:298] selected driver: kvm2
	I0717 19:45:16.517099 1094169 start.go:880] validating driver "kvm2" against &{Name:stopped-upgrade-983290 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.54 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:45:16.517257 1094169 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 19:45:16.518076 1094169 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:45:16.518196 1094169 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16890-1061725/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 19:45:16.535830 1094169 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.30.1
	I0717 19:45:16.536180 1094169 cni.go:84] Creating CNI manager for ""
	I0717 19:45:16.536197 1094169 cni.go:130] EnableDefaultCNI is true, recommending bridge
	I0717 19:45:16.536205 1094169 start_flags.go:319] config:
	{Name:stopped-upgrade-983290 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.54 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:45:16.536373 1094169 iso.go:125] acquiring lock: {Name:mk4c08fdde891ec9d41b144ca36ec57b2da32175 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:45:16.539135 1094169 out.go:177] * Starting control plane node stopped-upgrade-983290 in cluster stopped-upgrade-983290
	I0717 19:45:16.541176 1094169 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W0717 19:45:16.565337 1094169 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0717 19:45:16.565519 1094169 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/stopped-upgrade-983290/config.json ...
	I0717 19:45:16.565643 1094169 cache.go:107] acquiring lock: {Name:mk449e1e3e84e7247ff963a252dfa10609003db8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:45:16.565663 1094169 cache.go:107] acquiring lock: {Name:mkf584e34f70b15a7e4b8c22b37c2c5192ceba1d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:45:16.565685 1094169 cache.go:107] acquiring lock: {Name:mk42c5e717e91d01a12ff47fbdf4185e0db19e0e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:45:16.565721 1094169 cache.go:107] acquiring lock: {Name:mk030735a414bd2f3a5d0be81765f0a353901b1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:45:16.565788 1094169 cache.go:107] acquiring lock: {Name:mk245d32407f97ac302c30fd3bb4e71f8404ae2a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:45:16.565816 1094169 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
	I0717 19:45:16.565827 1094169 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0717 19:45:16.565859 1094169 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
	I0717 19:45:16.565910 1094169 start.go:365] acquiring machines lock for stopped-upgrade-983290: {Name:mk1fbc5d19c9a63796b02883911e39f1c406ff89 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:45:16.566038 1094169 cache.go:107] acquiring lock: {Name:mk6cc7986bf0f7d8a8d0f4f85aaa58d7e95e6170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:45:16.566078 1094169 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
	I0717 19:45:16.565643 1094169 cache.go:107] acquiring lock: {Name:mk9729e1d18cb52a56136e909fd3ce8f3567c08a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:45:16.566279 1094169 cache.go:115] /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0717 19:45:16.566267 1094169 cache.go:107] acquiring lock: {Name:mked8643444471b2a59a57aa76caf81e3ba71ad5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:45:16.566123 1094169 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
	I0717 19:45:16.566291 1094169 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 660.998µs
	I0717 19:45:16.566357 1094169 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0717 19:45:16.566139 1094169 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0717 19:45:16.566390 1094169 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
	I0717 19:45:16.567606 1094169 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0717 19:45:16.567601 1094169 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
	I0717 19:45:16.567934 1094169 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
	I0717 19:45:16.567958 1094169 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
	I0717 19:45:16.568015 1094169 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
	I0717 19:45:16.568328 1094169 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
	I0717 19:45:16.568480 1094169 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0717 19:45:16.765845 1094169 cache.go:162] opening:  /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5
	I0717 19:45:16.770064 1094169 cache.go:162] opening:  /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0717 19:45:16.770926 1094169 cache.go:162] opening:  /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0717 19:45:16.779389 1094169 cache.go:162] opening:  /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0
	I0717 19:45:16.780290 1094169 cache.go:162] opening:  /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0
	I0717 19:45:16.787298 1094169 cache.go:162] opening:  /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0
	I0717 19:45:16.787537 1094169 cache.go:162] opening:  /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0
	I0717 19:45:16.880730 1094169 cache.go:157] /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I0717 19:45:16.880760 1094169 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 315.075411ms
	I0717 19:45:16.880774 1094169 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I0717 19:45:17.212584 1094169 cache.go:157] /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I0717 19:45:17.212623 1094169 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 646.58809ms
	I0717 19:45:17.212640 1094169 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I0717 19:45:17.652446 1094169 cache.go:157] /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I0717 19:45:17.652545 1094169 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 1.086295218s
	I0717 19:45:17.652579 1094169 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I0717 19:45:17.844070 1094169 cache.go:157] /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I0717 19:45:17.844112 1094169 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 1.27846346s
	I0717 19:45:17.844131 1094169 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I0717 19:45:17.868370 1094169 cache.go:157] /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I0717 19:45:17.868408 1094169 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 1.302623816s
	I0717 19:45:17.868422 1094169 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I0717 19:45:18.282630 1094169 cache.go:157] /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0717 19:45:18.282678 1094169 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 1.716954728s
	I0717 19:45:18.282694 1094169 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0717 19:45:18.358717 1094169 cache.go:157] /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I0717 19:45:18.358755 1094169 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 1.793123448s
	I0717 19:45:18.358768 1094169 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I0717 19:45:18.358791 1094169 cache.go:87] Successfully saved all images to host disk.
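The 404 logged by preload.go:115 above means no preloaded image tarball is published for Kubernetes v1.17.0 on CRI-O, so the start falls back to caching each required image individually, which is what the cache.go lines record before "Successfully saved all images to host disk." A simplified sketch of that decision (helper names are hypothetical; the URL and image list are taken from the log):

// Simplified, hypothetical sketch of "preload missing -> cache each image".
package main

import (
	"fmt"
	"net/http"
)

// preloadExists roughly mirrors the check behind preload.go:115: probe the
// tarball URL and treat anything other than 200 as "no preload available".
func preloadExists(url string) bool {
	resp, err := http.Head(url)
	if err != nil {
		return false
	}
	resp.Body.Close()
	return resp.StatusCode == http.StatusOK // the run above got a 404 here
}

func main() {
	preloadURL := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4"
	images := []string{ // the images the log then caches one by one
		"registry.k8s.io/kube-apiserver:v1.17.0",
		"registry.k8s.io/kube-controller-manager:v1.17.0",
		"registry.k8s.io/kube-scheduler:v1.17.0",
		"registry.k8s.io/kube-proxy:v1.17.0",
		"registry.k8s.io/coredns:1.6.5",
		"registry.k8s.io/etcd:3.4.3-0",
		"registry.k8s.io/pause:3.1",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	if !preloadExists(preloadURL) {
		for _, img := range images {
			fmt.Println("would cache to host disk:", img) // stand-in for cache.go's save-to-tar path
		}
	}
}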
	I0717 19:45:34.827396 1094169 start.go:369] acquired machines lock for "stopped-upgrade-983290" in 18.261453051s
	I0717 19:45:34.827468 1094169 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:45:34.827490 1094169 fix.go:54] fixHost starting: minikube
	I0717 19:45:34.827881 1094169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:45:34.827929 1094169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:45:34.845977 1094169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43117
	I0717 19:45:34.846478 1094169 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:45:34.847163 1094169 main.go:141] libmachine: Using API Version  1
	I0717 19:45:34.847193 1094169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:45:34.847621 1094169 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:45:34.847874 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .DriverName
	I0717 19:45:34.848046 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetState
	I0717 19:45:34.849964 1094169 fix.go:102] recreateIfNeeded on stopped-upgrade-983290: state=Stopped err=<nil>
	I0717 19:45:34.850020 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .DriverName
	W0717 19:45:34.850200 1094169 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 19:45:34.853138 1094169 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-983290" ...
	I0717 19:45:34.855395 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .Start
	I0717 19:45:34.855710 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Ensuring networks are active...
	I0717 19:45:34.856566 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Ensuring network default is active
	I0717 19:45:34.857040 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Ensuring network minikube-net is active
	I0717 19:45:34.857536 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Getting domain xml...
	I0717 19:45:34.858471 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Creating domain...
	I0717 19:45:36.337799 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Waiting to get IP...
	I0717 19:45:36.338984 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | domain stopped-upgrade-983290 has defined MAC address 52:54:00:77:08:9a in network minikube-net
	I0717 19:45:36.339519 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | domain stopped-upgrade-983290 has current primary IP address 192.168.50.54 and MAC address 52:54:00:77:08:9a in network minikube-net
	I0717 19:45:36.339547 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Found IP for machine: 192.168.50.54
	I0717 19:45:36.339618 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Reserving static IP address...
	I0717 19:45:36.340085 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | found host DHCP lease matching {name: "stopped-upgrade-983290", mac: "52:54:00:77:08:9a", ip: "192.168.50.54"} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 20:42:01 +0000 UTC Type:0 Mac:52:54:00:77:08:9a Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:stopped-upgrade-983290 Clientid:01:52:54:00:77:08:9a}
	I0717 19:45:36.340117 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Reserved static IP address: 192.168.50.54
	I0717 19:45:36.340134 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | skip adding static IP to network minikube-net - found existing host DHCP lease matching {name: "stopped-upgrade-983290", mac: "52:54:00:77:08:9a", ip: "192.168.50.54"}
	I0717 19:45:36.340149 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | Getting to WaitForSSH function...
	I0717 19:45:36.340161 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Waiting for SSH to be available...
	I0717 19:45:36.342977 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | domain stopped-upgrade-983290 has defined MAC address 52:54:00:77:08:9a in network minikube-net
	I0717 19:45:36.343484 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:08:9a", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 20:42:01 +0000 UTC Type:0 Mac:52:54:00:77:08:9a Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:stopped-upgrade-983290 Clientid:01:52:54:00:77:08:9a}
	I0717 19:45:36.343527 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | domain stopped-upgrade-983290 has defined IP address 192.168.50.54 and MAC address 52:54:00:77:08:9a in network minikube-net
	I0717 19:45:36.343739 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | Using SSH client type: external
	I0717 19:45:36.343767 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | Using SSH private key: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/stopped-upgrade-983290/id_rsa (-rw-------)
	I0717 19:45:36.343800 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.54 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/stopped-upgrade-983290/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:45:36.343823 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | About to run SSH command:
	I0717 19:45:36.343838 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | exit 0
	I0717 19:45:53.510911 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | SSH cmd err, output: exit status 255: 
	I0717 19:45:53.510947 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0717 19:45:53.510962 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | command : exit 0
	I0717 19:45:53.510970 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | err     : exit status 255
	I0717 19:45:53.510983 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | output  : 
	I0717 19:45:56.511559 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | Getting to WaitForSSH function...
	I0717 19:45:56.514461 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | domain stopped-upgrade-983290 has defined MAC address 52:54:00:77:08:9a in network minikube-net
	I0717 19:45:56.514862 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:08:9a", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 20:42:01 +0000 UTC Type:0 Mac:52:54:00:77:08:9a Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:stopped-upgrade-983290 Clientid:01:52:54:00:77:08:9a}
	I0717 19:45:56.514898 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | domain stopped-upgrade-983290 has defined IP address 192.168.50.54 and MAC address 52:54:00:77:08:9a in network minikube-net
	I0717 19:45:56.515051 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | Using SSH client type: external
	I0717 19:45:56.515088 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | Using SSH private key: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/stopped-upgrade-983290/id_rsa (-rw-------)
	I0717 19:45:56.515125 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.54 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/stopped-upgrade-983290/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:45:56.515144 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | About to run SSH command:
	I0717 19:45:56.515158 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | exit 0
	I0717 19:46:03.718605 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | SSH cmd err, output: exit status 255: 
	I0717 19:46:03.718646 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0717 19:46:03.718668 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | command : exit 0
	I0717 19:46:03.718682 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | err     : exit status 255
	I0717 19:46:03.718698 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | output  : 
	I0717 19:46:06.720186 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | Getting to WaitForSSH function...
	I0717 19:46:06.723700 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | domain stopped-upgrade-983290 has defined MAC address 52:54:00:77:08:9a in network minikube-net
	I0717 19:46:06.724293 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:08:9a", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 20:42:01 +0000 UTC Type:0 Mac:52:54:00:77:08:9a Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:stopped-upgrade-983290 Clientid:01:52:54:00:77:08:9a}
	I0717 19:46:06.724337 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | domain stopped-upgrade-983290 has defined IP address 192.168.50.54 and MAC address 52:54:00:77:08:9a in network minikube-net
	I0717 19:46:06.724415 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | Using SSH client type: external
	I0717 19:46:06.724486 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | Using SSH private key: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/stopped-upgrade-983290/id_rsa (-rw-------)
	I0717 19:46:06.724534 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.54 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/stopped-upgrade-983290/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:46:06.724556 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | About to run SSH command:
	I0717 19:46:06.724572 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | exit 0
	I0717 19:46:08.917472 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | SSH cmd err, output: <nil>: 
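The two probes above that end in exit status 255 are attempts to run `exit 0` on a VM that is still booting; the third attempt succeeds and provisioning resumes. A minimal sketch of that WaitForSSH retry pattern (hypothetical helper; the ssh options are abbreviated from the ones shown in the log):

// Minimal sketch of the WaitForSSH retry pattern seen above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH keeps probing the guest with `exit 0` until it answers or the
// deadline passes.
func waitForSSH(user, addr, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		err := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			user+"@"+addr, "exit 0").Run()
		if err == nil {
			return nil // guest is reachable; provisioning continues
		}
		time.Sleep(3 * time.Second) // the log waits ~3s before the next attempt
	}
	return fmt.Errorf("ssh to %s not ready within %v", addr, timeout)
}

func main() {
	err := waitForSSH("docker", "192.168.50.54",
		"/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/stopped-upgrade-983290/id_rsa",
		5*time.Minute)
	fmt.Println("wait for ssh:", err)
}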
	I0717 19:46:08.917964 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetConfigRaw
	I0717 19:46:08.918682 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetIP
	I0717 19:46:08.921858 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | domain stopped-upgrade-983290 has defined MAC address 52:54:00:77:08:9a in network minikube-net
	I0717 19:46:08.922313 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:08:9a", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 20:42:01 +0000 UTC Type:0 Mac:52:54:00:77:08:9a Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:stopped-upgrade-983290 Clientid:01:52:54:00:77:08:9a}
	I0717 19:46:08.922351 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | domain stopped-upgrade-983290 has defined IP address 192.168.50.54 and MAC address 52:54:00:77:08:9a in network minikube-net
	I0717 19:46:08.922594 1094169 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/stopped-upgrade-983290/config.json ...
	I0717 19:46:08.922812 1094169 machine.go:88] provisioning docker machine ...
	I0717 19:46:08.922832 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .DriverName
	I0717 19:46:08.923093 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetMachineName
	I0717 19:46:08.923290 1094169 buildroot.go:166] provisioning hostname "stopped-upgrade-983290"
	I0717 19:46:08.923311 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetMachineName
	I0717 19:46:08.923468 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetSSHHostname
	I0717 19:46:08.925814 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | domain stopped-upgrade-983290 has defined MAC address 52:54:00:77:08:9a in network minikube-net
	I0717 19:46:08.926186 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:08:9a", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 20:42:01 +0000 UTC Type:0 Mac:52:54:00:77:08:9a Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:stopped-upgrade-983290 Clientid:01:52:54:00:77:08:9a}
	I0717 19:46:08.926231 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | domain stopped-upgrade-983290 has defined IP address 192.168.50.54 and MAC address 52:54:00:77:08:9a in network minikube-net
	I0717 19:46:08.926391 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetSSHPort
	I0717 19:46:08.926626 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetSSHKeyPath
	I0717 19:46:08.926829 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetSSHKeyPath
	I0717 19:46:08.926986 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetSSHUsername
	I0717 19:46:08.927153 1094169 main.go:141] libmachine: Using SSH client type: native
	I0717 19:46:08.927603 1094169 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.54 22 <nil> <nil>}
	I0717 19:46:08.927618 1094169 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-983290 && echo "stopped-upgrade-983290" | sudo tee /etc/hostname
	I0717 19:46:09.050762 1094169 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-983290
	
	I0717 19:46:09.050801 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetSSHHostname
	I0717 19:46:09.054404 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | domain stopped-upgrade-983290 has defined MAC address 52:54:00:77:08:9a in network minikube-net
	I0717 19:46:09.054871 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:08:9a", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 20:42:01 +0000 UTC Type:0 Mac:52:54:00:77:08:9a Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:stopped-upgrade-983290 Clientid:01:52:54:00:77:08:9a}
	I0717 19:46:09.054917 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | domain stopped-upgrade-983290 has defined IP address 192.168.50.54 and MAC address 52:54:00:77:08:9a in network minikube-net
	I0717 19:46:09.055133 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetSSHPort
	I0717 19:46:09.055401 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetSSHKeyPath
	I0717 19:46:09.055576 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetSSHKeyPath
	I0717 19:46:09.055778 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetSSHUsername
	I0717 19:46:09.055987 1094169 main.go:141] libmachine: Using SSH client type: native
	I0717 19:46:09.056527 1094169 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.54 22 <nil> <nil>}
	I0717 19:46:09.056547 1094169 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-983290' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-983290/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-983290' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:46:09.175109 1094169 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:46:09.175149 1094169 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16890-1061725/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-1061725/.minikube}
	I0717 19:46:09.175175 1094169 buildroot.go:174] setting up certificates
	I0717 19:46:09.175185 1094169 provision.go:83] configureAuth start
	I0717 19:46:09.175195 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetMachineName
	I0717 19:46:09.175524 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetIP
	I0717 19:46:09.178503 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | domain stopped-upgrade-983290 has defined MAC address 52:54:00:77:08:9a in network minikube-net
	I0717 19:46:09.178904 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:08:9a", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 20:42:01 +0000 UTC Type:0 Mac:52:54:00:77:08:9a Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:stopped-upgrade-983290 Clientid:01:52:54:00:77:08:9a}
	I0717 19:46:09.178933 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | domain stopped-upgrade-983290 has defined IP address 192.168.50.54 and MAC address 52:54:00:77:08:9a in network minikube-net
	I0717 19:46:09.179089 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetSSHHostname
	I0717 19:46:09.181539 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | domain stopped-upgrade-983290 has defined MAC address 52:54:00:77:08:9a in network minikube-net
	I0717 19:46:09.181906 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:08:9a", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 20:42:01 +0000 UTC Type:0 Mac:52:54:00:77:08:9a Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:stopped-upgrade-983290 Clientid:01:52:54:00:77:08:9a}
	I0717 19:46:09.181938 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | domain stopped-upgrade-983290 has defined IP address 192.168.50.54 and MAC address 52:54:00:77:08:9a in network minikube-net
	I0717 19:46:09.182129 1094169 provision.go:138] copyHostCerts
	I0717 19:46:09.182209 1094169 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem, removing ...
	I0717 19:46:09.182223 1094169 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem
	I0717 19:46:09.182310 1094169 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem (1123 bytes)
	I0717 19:46:09.182460 1094169 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem, removing ...
	I0717 19:46:09.182473 1094169 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem
	I0717 19:46:09.182502 1094169 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem (1675 bytes)
	I0717 19:46:09.182552 1094169 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem, removing ...
	I0717 19:46:09.182559 1094169 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem
	I0717 19:46:09.182578 1094169 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem (1082 bytes)
	I0717 19:46:09.182622 1094169 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-983290 san=[192.168.50.54 192.168.50.54 localhost 127.0.0.1 minikube stopped-upgrade-983290]
	I0717 19:46:09.464580 1094169 provision.go:172] copyRemoteCerts
	I0717 19:46:09.464645 1094169 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:46:09.464675 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetSSHHostname
	I0717 19:46:09.467902 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | domain stopped-upgrade-983290 has defined MAC address 52:54:00:77:08:9a in network minikube-net
	I0717 19:46:09.468386 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:08:9a", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 20:42:01 +0000 UTC Type:0 Mac:52:54:00:77:08:9a Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:stopped-upgrade-983290 Clientid:01:52:54:00:77:08:9a}
	I0717 19:46:09.468440 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | domain stopped-upgrade-983290 has defined IP address 192.168.50.54 and MAC address 52:54:00:77:08:9a in network minikube-net
	I0717 19:46:09.468673 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetSSHPort
	I0717 19:46:09.468953 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetSSHKeyPath
	I0717 19:46:09.469205 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetSSHUsername
	I0717 19:46:09.469407 1094169 sshutil.go:53] new ssh client: &{IP:192.168.50.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/stopped-upgrade-983290/id_rsa Username:docker}
	I0717 19:46:09.556417 1094169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 19:46:09.571728 1094169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 19:46:09.588163 1094169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 19:46:09.602302 1094169 provision.go:86] duration metric: configureAuth took 427.100723ms
	I0717 19:46:09.602337 1094169 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:46:09.602569 1094169 config.go:182] Loaded profile config "stopped-upgrade-983290": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0717 19:46:09.602715 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetSSHHostname
	I0717 19:46:09.606153 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | domain stopped-upgrade-983290 has defined MAC address 52:54:00:77:08:9a in network minikube-net
	I0717 19:46:09.606530 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:08:9a", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 20:42:01 +0000 UTC Type:0 Mac:52:54:00:77:08:9a Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:stopped-upgrade-983290 Clientid:01:52:54:00:77:08:9a}
	I0717 19:46:09.606573 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | domain stopped-upgrade-983290 has defined IP address 192.168.50.54 and MAC address 52:54:00:77:08:9a in network minikube-net
	I0717 19:46:09.606738 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetSSHPort
	I0717 19:46:09.606985 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetSSHKeyPath
	I0717 19:46:09.607183 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetSSHKeyPath
	I0717 19:46:09.607338 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetSSHUsername
	I0717 19:46:09.607593 1094169 main.go:141] libmachine: Using SSH client type: native
	I0717 19:46:09.608191 1094169 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.54 22 <nil> <nil>}
	I0717 19:46:09.608219 1094169 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:46:21.771251 1094169 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:46:21.771289 1094169 machine.go:91] provisioned docker machine in 12.848462443s
	I0717 19:46:21.771305 1094169 start.go:300] post-start starting for "stopped-upgrade-983290" (driver="kvm2")
	I0717 19:46:21.771318 1094169 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:46:21.771377 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .DriverName
	I0717 19:46:21.771828 1094169 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:46:21.771871 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetSSHHostname
	I0717 19:46:21.775374 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | domain stopped-upgrade-983290 has defined MAC address 52:54:00:77:08:9a in network minikube-net
	I0717 19:46:21.775778 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:08:9a", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 20:42:01 +0000 UTC Type:0 Mac:52:54:00:77:08:9a Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:stopped-upgrade-983290 Clientid:01:52:54:00:77:08:9a}
	I0717 19:46:21.775908 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | domain stopped-upgrade-983290 has defined IP address 192.168.50.54 and MAC address 52:54:00:77:08:9a in network minikube-net
	I0717 19:46:21.775948 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetSSHPort
	I0717 19:46:21.776160 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetSSHKeyPath
	I0717 19:46:21.776341 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetSSHUsername
	I0717 19:46:21.776448 1094169 sshutil.go:53] new ssh client: &{IP:192.168.50.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/stopped-upgrade-983290/id_rsa Username:docker}
	I0717 19:46:21.866168 1094169 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:46:21.871209 1094169 info.go:137] Remote host: Buildroot 2019.02.7
	I0717 19:46:21.871245 1094169 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/addons for local assets ...
	I0717 19:46:21.871336 1094169 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/files for local assets ...
	I0717 19:46:21.871443 1094169 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> 10689542.pem in /etc/ssl/certs
	I0717 19:46:21.871578 1094169 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:46:21.878722 1094169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:46:21.895558 1094169 start.go:303] post-start completed in 124.207028ms
	I0717 19:46:21.895596 1094169 fix.go:56] fixHost completed within 47.068106148s
	I0717 19:46:21.895639 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetSSHHostname
	I0717 19:46:21.899122 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | domain stopped-upgrade-983290 has defined MAC address 52:54:00:77:08:9a in network minikube-net
	I0717 19:46:21.899536 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:08:9a", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 20:42:01 +0000 UTC Type:0 Mac:52:54:00:77:08:9a Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:stopped-upgrade-983290 Clientid:01:52:54:00:77:08:9a}
	I0717 19:46:21.899584 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | domain stopped-upgrade-983290 has defined IP address 192.168.50.54 and MAC address 52:54:00:77:08:9a in network minikube-net
	I0717 19:46:21.899791 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetSSHPort
	I0717 19:46:21.900067 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetSSHKeyPath
	I0717 19:46:21.900272 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetSSHKeyPath
	I0717 19:46:21.900393 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetSSHUsername
	I0717 19:46:21.900524 1094169 main.go:141] libmachine: Using SSH client type: native
	I0717 19:46:21.901065 1094169 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.54 22 <nil> <nil>}
	I0717 19:46:21.901088 1094169 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 19:46:22.014907 1094169 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689623181.945262198
	
	I0717 19:46:22.014947 1094169 fix.go:206] guest clock: 1689623181.945262198
	I0717 19:46:22.014958 1094169 fix.go:219] Guest: 2023-07-17 19:46:21.945262198 +0000 UTC Remote: 2023-07-17 19:46:21.89560047 +0000 UTC m=+65.563791107 (delta=49.661728ms)
	I0717 19:46:22.014987 1094169 fix.go:190] guest clock delta is within tolerance: 49.661728ms
	I0717 19:46:22.014994 1094169 start.go:83] releasing machines lock for "stopped-upgrade-983290", held for 47.187552281s
	I0717 19:46:22.015039 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .DriverName
	I0717 19:46:22.015422 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetIP
	I0717 19:46:22.018981 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | domain stopped-upgrade-983290 has defined MAC address 52:54:00:77:08:9a in network minikube-net
	I0717 19:46:22.019423 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:08:9a", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 20:42:01 +0000 UTC Type:0 Mac:52:54:00:77:08:9a Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:stopped-upgrade-983290 Clientid:01:52:54:00:77:08:9a}
	I0717 19:46:22.019460 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | domain stopped-upgrade-983290 has defined IP address 192.168.50.54 and MAC address 52:54:00:77:08:9a in network minikube-net
	I0717 19:46:22.019708 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .DriverName
	I0717 19:46:22.020470 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .DriverName
	I0717 19:46:22.020716 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .DriverName
	I0717 19:46:22.020854 1094169 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:46:22.020937 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetSSHHostname
	I0717 19:46:22.020958 1094169 ssh_runner.go:195] Run: cat /version.json
	I0717 19:46:22.020986 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetSSHHostname
	I0717 19:46:22.025034 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | domain stopped-upgrade-983290 has defined MAC address 52:54:00:77:08:9a in network minikube-net
	I0717 19:46:22.025186 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | domain stopped-upgrade-983290 has defined MAC address 52:54:00:77:08:9a in network minikube-net
	I0717 19:46:22.025546 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:08:9a", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 20:42:01 +0000 UTC Type:0 Mac:52:54:00:77:08:9a Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:stopped-upgrade-983290 Clientid:01:52:54:00:77:08:9a}
	I0717 19:46:22.025601 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | domain stopped-upgrade-983290 has defined IP address 192.168.50.54 and MAC address 52:54:00:77:08:9a in network minikube-net
	I0717 19:46:22.025730 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:08:9a", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 20:42:01 +0000 UTC Type:0 Mac:52:54:00:77:08:9a Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:stopped-upgrade-983290 Clientid:01:52:54:00:77:08:9a}
	I0717 19:46:22.025770 1094169 main.go:141] libmachine: (stopped-upgrade-983290) DBG | domain stopped-upgrade-983290 has defined IP address 192.168.50.54 and MAC address 52:54:00:77:08:9a in network minikube-net
	I0717 19:46:22.025783 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetSSHPort
	I0717 19:46:22.025919 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetSSHPort
	I0717 19:46:22.026026 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetSSHKeyPath
	I0717 19:46:22.026117 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetSSHKeyPath
	I0717 19:46:22.026168 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetSSHUsername
	I0717 19:46:22.026243 1094169 main.go:141] libmachine: (stopped-upgrade-983290) Calling .GetSSHUsername
	I0717 19:46:22.026332 1094169 sshutil.go:53] new ssh client: &{IP:192.168.50.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/stopped-upgrade-983290/id_rsa Username:docker}
	I0717 19:46:22.026409 1094169 sshutil.go:53] new ssh client: &{IP:192.168.50.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/stopped-upgrade-983290/id_rsa Username:docker}
	W0717 19:46:22.138982 1094169 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0717 19:46:22.139118 1094169 ssh_runner.go:195] Run: systemctl --version
	I0717 19:46:22.144707 1094169 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:46:22.333004 1094169 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:46:22.338893 1094169 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:46:22.338975 1094169 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:46:22.345188 1094169 cni.go:265] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 19:46:22.345216 1094169 start.go:469] detecting cgroup driver to use...
	I0717 19:46:22.345295 1094169 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:46:22.364052 1094169 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:46:22.376103 1094169 docker.go:196] disabling cri-docker service (if available) ...
	I0717 19:46:22.376189 1094169 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:46:22.386072 1094169 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:46:22.396279 1094169 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0717 19:46:22.405983 1094169 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0717 19:46:22.406041 1094169 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:46:22.515454 1094169 docker.go:212] disabling docker service ...
	I0717 19:46:22.515539 1094169 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:46:22.528289 1094169 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:46:22.537035 1094169 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:46:22.636072 1094169 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:46:22.737265 1094169 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:46:22.748414 1094169 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:46:22.763768 1094169 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0717 19:46:22.763857 1094169 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:46:22.775096 1094169 out.go:177] 
	W0717 19:46:22.777318 1094169 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0717 19:46:22.777353 1094169 out.go:239] * 
	* 
	W0717 19:46:22.778206 1094169 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 19:46:22.781090 1094169 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:212: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-983290 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (296.73s)
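The exit above comes from the pause-image step: the start path assumes CRI-O keeps a drop-in config at /etc/crio/crio.conf.d/02-crio.conf, but the Buildroot 2019.02.7 guest provisioned from the old v1.6.2 ISO has no such file, so the sed aborts with "No such file or directory" and the run fails with RUNTIME_ENABLE. A defensive rewrite of that one command is sketched below; it is only an illustration (not minikube's actual fix) and it assumes the older image keeps its CRI-O configuration in /etc/crio/crio.conf instead:

	# sketch: patch pause_image in whichever CRI-O config file actually exists
	cfg=/etc/crio/crio.conf.d/02-crio.conf
	[ -f "$cfg" ] || cfg=/etc/crio/crio.conf   # fall back to the monolithic config on older images
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' "$cfg"
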

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (69.19s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-882959 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0717 19:43:01.330605 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/functional-685960/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-882959 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m0.499766197s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-882959] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16890
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16890-1061725/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1061725/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting control plane node pause-882959 in cluster pause-882959
	* Updating the running kvm2 "pause-882959" VM ...
	* Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-882959" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
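None of the lines in the stdout summary above contain the expected fast-path message; the run instead goes through "Preparing Kubernetes v1.27.3 on CRI-O 1.24.1" and "Configuring bridge CNI", and the stderr below shows the machine being re-provisioned (including a CRI-O restart between 19:43:01 and 19:43:10). A minimal manual reproduction of the check the test performs, reusing the profile name and command from the log (a sketch only):

	# sketch: re-run the second start and look for the no-reconfiguration message
	out/minikube-linux-amd64 start -p pause-882959 --alsologtostderr -v=1 \
	  --driver=kvm2 --container-runtime=crio 2>&1 \
	  | grep -q "The running cluster does not require reconfiguration" \
	  && echo "fast path taken" || echo "cluster was reconfigured"
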
** stderr ** 
	I0717 19:43:00.546861 1092475 out.go:296] Setting OutFile to fd 1 ...
	I0717 19:43:00.547034 1092475 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:43:00.547085 1092475 out.go:309] Setting ErrFile to fd 2...
	I0717 19:43:00.547095 1092475 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:43:00.547311 1092475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1061725/.minikube/bin
	I0717 19:43:00.547962 1092475 out.go:303] Setting JSON to false
	I0717 19:43:00.549422 1092475 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":15932,"bootTime":1689607049,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 19:43:00.549505 1092475 start.go:138] virtualization: kvm guest
	I0717 19:43:00.552903 1092475 out.go:177] * [pause-882959] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 19:43:00.555514 1092475 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 19:43:00.557433 1092475 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:43:00.555546 1092475 notify.go:220] Checking for updates...
	I0717 19:43:00.559359 1092475 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 19:43:00.561294 1092475 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1061725/.minikube
	I0717 19:43:00.563009 1092475 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 19:43:00.564859 1092475 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 19:43:00.567391 1092475 config.go:182] Loaded profile config "pause-882959": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:43:00.567850 1092475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:43:00.567946 1092475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:43:00.584185 1092475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38049
	I0717 19:43:00.584823 1092475 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:43:00.585477 1092475 main.go:141] libmachine: Using API Version  1
	I0717 19:43:00.585496 1092475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:43:00.585956 1092475 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:43:00.586154 1092475 main.go:141] libmachine: (pause-882959) Calling .DriverName
	I0717 19:43:00.586433 1092475 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 19:43:00.586740 1092475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:43:00.586781 1092475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:43:00.602612 1092475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43821
	I0717 19:43:00.603147 1092475 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:43:00.603834 1092475 main.go:141] libmachine: Using API Version  1
	I0717 19:43:00.603871 1092475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:43:00.604304 1092475 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:43:00.604530 1092475 main.go:141] libmachine: (pause-882959) Calling .DriverName
	I0717 19:43:00.646191 1092475 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 19:43:00.647918 1092475 start.go:298] selected driver: kvm2
	I0717 19:43:00.647943 1092475 start.go:880] validating driver "kvm2" against &{Name:pause-882959 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:pause-882959 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.161 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:43:00.648144 1092475 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 19:43:00.648622 1092475 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:43:00.648735 1092475 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16890-1061725/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 19:43:00.673523 1092475 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.30.1
	I0717 19:43:00.674280 1092475 cni.go:84] Creating CNI manager for ""
	I0717 19:43:00.674306 1092475 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:43:00.674317 1092475 start_flags.go:319] config:
	{Name:pause-882959 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:pause-882959 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.161 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:43:00.674576 1092475 iso.go:125] acquiring lock: {Name:mk4c08fdde891ec9d41b144ca36ec57b2da32175 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:43:00.676917 1092475 out.go:177] * Starting control plane node pause-882959 in cluster pause-882959
	I0717 19:43:00.678980 1092475 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 19:43:00.679052 1092475 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0717 19:43:00.679064 1092475 cache.go:57] Caching tarball of preloaded images
	I0717 19:43:00.679218 1092475 preload.go:174] Found /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 19:43:00.679258 1092475 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 19:43:00.679482 1092475 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/pause-882959/config.json ...
	I0717 19:43:00.679819 1092475 start.go:365] acquiring machines lock for pause-882959: {Name:mk1fbc5d19c9a63796b02883911e39f1c406ff89 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:43:00.679898 1092475 start.go:369] acquired machines lock for "pause-882959" in 47.303µs
	I0717 19:43:00.679920 1092475 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:43:00.679931 1092475 fix.go:54] fixHost starting: 
	I0717 19:43:00.680327 1092475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:43:00.680387 1092475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:43:00.696612 1092475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33085
	I0717 19:43:00.697157 1092475 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:43:00.697888 1092475 main.go:141] libmachine: Using API Version  1
	I0717 19:43:00.697923 1092475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:43:00.698337 1092475 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:43:00.698612 1092475 main.go:141] libmachine: (pause-882959) Calling .DriverName
	I0717 19:43:00.698806 1092475 main.go:141] libmachine: (pause-882959) Calling .GetState
	I0717 19:43:00.700841 1092475 fix.go:102] recreateIfNeeded on pause-882959: state=Running err=<nil>
	W0717 19:43:00.700885 1092475 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 19:43:00.702998 1092475 out.go:177] * Updating the running kvm2 "pause-882959" VM ...
	I0717 19:43:00.705006 1092475 machine.go:88] provisioning docker machine ...
	I0717 19:43:00.705039 1092475 main.go:141] libmachine: (pause-882959) Calling .DriverName
	I0717 19:43:00.705460 1092475 main.go:141] libmachine: (pause-882959) Calling .GetMachineName
	I0717 19:43:00.705787 1092475 buildroot.go:166] provisioning hostname "pause-882959"
	I0717 19:43:00.705819 1092475 main.go:141] libmachine: (pause-882959) Calling .GetMachineName
	I0717 19:43:00.706037 1092475 main.go:141] libmachine: (pause-882959) Calling .GetSSHHostname
	I0717 19:43:00.709324 1092475 main.go:141] libmachine: (pause-882959) DBG | domain pause-882959 has defined MAC address 52:54:00:8d:67:41 in network mk-pause-882959
	I0717 19:43:00.709792 1092475 main.go:141] libmachine: (pause-882959) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:67:41", ip: ""} in network mk-pause-882959: {Iface:virbr3 ExpiryTime:2023-07-17 20:42:08 +0000 UTC Type:0 Mac:52:54:00:8d:67:41 Iaid: IPaddr:192.168.61.161 Prefix:24 Hostname:pause-882959 Clientid:01:52:54:00:8d:67:41}
	I0717 19:43:00.709824 1092475 main.go:141] libmachine: (pause-882959) DBG | domain pause-882959 has defined IP address 192.168.61.161 and MAC address 52:54:00:8d:67:41 in network mk-pause-882959
	I0717 19:43:00.710035 1092475 main.go:141] libmachine: (pause-882959) Calling .GetSSHPort
	I0717 19:43:00.710231 1092475 main.go:141] libmachine: (pause-882959) Calling .GetSSHKeyPath
	I0717 19:43:00.710404 1092475 main.go:141] libmachine: (pause-882959) Calling .GetSSHKeyPath
	I0717 19:43:00.710626 1092475 main.go:141] libmachine: (pause-882959) Calling .GetSSHUsername
	I0717 19:43:00.710809 1092475 main.go:141] libmachine: Using SSH client type: native
	I0717 19:43:00.711261 1092475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.161 22 <nil> <nil>}
	I0717 19:43:00.711279 1092475 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-882959 && echo "pause-882959" | sudo tee /etc/hostname
	I0717 19:43:00.881240 1092475 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-882959
	
	I0717 19:43:00.881277 1092475 main.go:141] libmachine: (pause-882959) Calling .GetSSHHostname
	I0717 19:43:00.884515 1092475 main.go:141] libmachine: (pause-882959) DBG | domain pause-882959 has defined MAC address 52:54:00:8d:67:41 in network mk-pause-882959
	I0717 19:43:00.885025 1092475 main.go:141] libmachine: (pause-882959) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:67:41", ip: ""} in network mk-pause-882959: {Iface:virbr3 ExpiryTime:2023-07-17 20:42:08 +0000 UTC Type:0 Mac:52:54:00:8d:67:41 Iaid: IPaddr:192.168.61.161 Prefix:24 Hostname:pause-882959 Clientid:01:52:54:00:8d:67:41}
	I0717 19:43:00.885068 1092475 main.go:141] libmachine: (pause-882959) DBG | domain pause-882959 has defined IP address 192.168.61.161 and MAC address 52:54:00:8d:67:41 in network mk-pause-882959
	I0717 19:43:00.885413 1092475 main.go:141] libmachine: (pause-882959) Calling .GetSSHPort
	I0717 19:43:00.885732 1092475 main.go:141] libmachine: (pause-882959) Calling .GetSSHKeyPath
	I0717 19:43:00.885962 1092475 main.go:141] libmachine: (pause-882959) Calling .GetSSHKeyPath
	I0717 19:43:00.886155 1092475 main.go:141] libmachine: (pause-882959) Calling .GetSSHUsername
	I0717 19:43:00.886355 1092475 main.go:141] libmachine: Using SSH client type: native
	I0717 19:43:00.886892 1092475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.161 22 <nil> <nil>}
	I0717 19:43:00.886923 1092475 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-882959' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-882959/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-882959' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:43:01.024158 1092475 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:43:01.024197 1092475 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16890-1061725/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-1061725/.minikube}
	I0717 19:43:01.024226 1092475 buildroot.go:174] setting up certificates
	I0717 19:43:01.024248 1092475 provision.go:83] configureAuth start
	I0717 19:43:01.024268 1092475 main.go:141] libmachine: (pause-882959) Calling .GetMachineName
	I0717 19:43:01.024632 1092475 main.go:141] libmachine: (pause-882959) Calling .GetIP
	I0717 19:43:01.027638 1092475 main.go:141] libmachine: (pause-882959) DBG | domain pause-882959 has defined MAC address 52:54:00:8d:67:41 in network mk-pause-882959
	I0717 19:43:01.028078 1092475 main.go:141] libmachine: (pause-882959) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:67:41", ip: ""} in network mk-pause-882959: {Iface:virbr3 ExpiryTime:2023-07-17 20:42:08 +0000 UTC Type:0 Mac:52:54:00:8d:67:41 Iaid: IPaddr:192.168.61.161 Prefix:24 Hostname:pause-882959 Clientid:01:52:54:00:8d:67:41}
	I0717 19:43:01.028111 1092475 main.go:141] libmachine: (pause-882959) DBG | domain pause-882959 has defined IP address 192.168.61.161 and MAC address 52:54:00:8d:67:41 in network mk-pause-882959
	I0717 19:43:01.028337 1092475 main.go:141] libmachine: (pause-882959) Calling .GetSSHHostname
	I0717 19:43:01.030882 1092475 main.go:141] libmachine: (pause-882959) DBG | domain pause-882959 has defined MAC address 52:54:00:8d:67:41 in network mk-pause-882959
	I0717 19:43:01.031278 1092475 main.go:141] libmachine: (pause-882959) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:67:41", ip: ""} in network mk-pause-882959: {Iface:virbr3 ExpiryTime:2023-07-17 20:42:08 +0000 UTC Type:0 Mac:52:54:00:8d:67:41 Iaid: IPaddr:192.168.61.161 Prefix:24 Hostname:pause-882959 Clientid:01:52:54:00:8d:67:41}
	I0717 19:43:01.031311 1092475 main.go:141] libmachine: (pause-882959) DBG | domain pause-882959 has defined IP address 192.168.61.161 and MAC address 52:54:00:8d:67:41 in network mk-pause-882959
	I0717 19:43:01.031536 1092475 provision.go:138] copyHostCerts
	I0717 19:43:01.031637 1092475 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem, removing ...
	I0717 19:43:01.031651 1092475 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem
	I0717 19:43:01.031719 1092475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem (1082 bytes)
	I0717 19:43:01.031841 1092475 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem, removing ...
	I0717 19:43:01.031854 1092475 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem
	I0717 19:43:01.031880 1092475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem (1123 bytes)
	I0717 19:43:01.031949 1092475 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem, removing ...
	I0717 19:43:01.031959 1092475 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem
	I0717 19:43:01.031984 1092475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem (1675 bytes)
	I0717 19:43:01.032063 1092475 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem org=jenkins.pause-882959 san=[192.168.61.161 192.168.61.161 localhost 127.0.0.1 minikube pause-882959]
	I0717 19:43:01.216269 1092475 provision.go:172] copyRemoteCerts
	I0717 19:43:01.216374 1092475 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:43:01.216425 1092475 main.go:141] libmachine: (pause-882959) Calling .GetSSHHostname
	I0717 19:43:01.221185 1092475 main.go:141] libmachine: (pause-882959) DBG | domain pause-882959 has defined MAC address 52:54:00:8d:67:41 in network mk-pause-882959
	I0717 19:43:01.221675 1092475 main.go:141] libmachine: (pause-882959) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:67:41", ip: ""} in network mk-pause-882959: {Iface:virbr3 ExpiryTime:2023-07-17 20:42:08 +0000 UTC Type:0 Mac:52:54:00:8d:67:41 Iaid: IPaddr:192.168.61.161 Prefix:24 Hostname:pause-882959 Clientid:01:52:54:00:8d:67:41}
	I0717 19:43:01.221763 1092475 main.go:141] libmachine: (pause-882959) DBG | domain pause-882959 has defined IP address 192.168.61.161 and MAC address 52:54:00:8d:67:41 in network mk-pause-882959
	I0717 19:43:01.222134 1092475 main.go:141] libmachine: (pause-882959) Calling .GetSSHPort
	I0717 19:43:01.222392 1092475 main.go:141] libmachine: (pause-882959) Calling .GetSSHKeyPath
	I0717 19:43:01.222592 1092475 main.go:141] libmachine: (pause-882959) Calling .GetSSHUsername
	I0717 19:43:01.222755 1092475 sshutil.go:53] new ssh client: &{IP:192.168.61.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/pause-882959/id_rsa Username:docker}
	I0717 19:43:01.327123 1092475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 19:43:01.363178 1092475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0717 19:43:01.395258 1092475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 19:43:01.428504 1092475 provision.go:86] duration metric: configureAuth took 404.237692ms
	I0717 19:43:01.428544 1092475 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:43:01.428863 1092475 config.go:182] Loaded profile config "pause-882959": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:43:01.428968 1092475 main.go:141] libmachine: (pause-882959) Calling .GetSSHHostname
	I0717 19:43:01.432571 1092475 main.go:141] libmachine: (pause-882959) DBG | domain pause-882959 has defined MAC address 52:54:00:8d:67:41 in network mk-pause-882959
	I0717 19:43:01.432948 1092475 main.go:141] libmachine: (pause-882959) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:67:41", ip: ""} in network mk-pause-882959: {Iface:virbr3 ExpiryTime:2023-07-17 20:42:08 +0000 UTC Type:0 Mac:52:54:00:8d:67:41 Iaid: IPaddr:192.168.61.161 Prefix:24 Hostname:pause-882959 Clientid:01:52:54:00:8d:67:41}
	I0717 19:43:01.432983 1092475 main.go:141] libmachine: (pause-882959) DBG | domain pause-882959 has defined IP address 192.168.61.161 and MAC address 52:54:00:8d:67:41 in network mk-pause-882959
	I0717 19:43:01.433247 1092475 main.go:141] libmachine: (pause-882959) Calling .GetSSHPort
	I0717 19:43:01.433460 1092475 main.go:141] libmachine: (pause-882959) Calling .GetSSHKeyPath
	I0717 19:43:01.433669 1092475 main.go:141] libmachine: (pause-882959) Calling .GetSSHKeyPath
	I0717 19:43:01.433876 1092475 main.go:141] libmachine: (pause-882959) Calling .GetSSHUsername
	I0717 19:43:01.434122 1092475 main.go:141] libmachine: Using SSH client type: native
	I0717 19:43:01.434781 1092475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.161 22 <nil> <nil>}
	I0717 19:43:01.434814 1092475 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:43:10.099275 1092475 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:43:10.099304 1092475 machine.go:91] provisioned docker machine in 9.394278502s
	I0717 19:43:10.099317 1092475 start.go:300] post-start starting for "pause-882959" (driver="kvm2")
	I0717 19:43:10.099329 1092475 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:43:10.099352 1092475 main.go:141] libmachine: (pause-882959) Calling .DriverName
	I0717 19:43:10.099721 1092475 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:43:10.099757 1092475 main.go:141] libmachine: (pause-882959) Calling .GetSSHHostname
	I0717 19:43:10.652883 1092475 main.go:141] libmachine: (pause-882959) DBG | domain pause-882959 has defined MAC address 52:54:00:8d:67:41 in network mk-pause-882959
	I0717 19:43:10.653277 1092475 main.go:141] libmachine: (pause-882959) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:67:41", ip: ""} in network mk-pause-882959: {Iface:virbr3 ExpiryTime:2023-07-17 20:42:08 +0000 UTC Type:0 Mac:52:54:00:8d:67:41 Iaid: IPaddr:192.168.61.161 Prefix:24 Hostname:pause-882959 Clientid:01:52:54:00:8d:67:41}
	I0717 19:43:10.653310 1092475 main.go:141] libmachine: (pause-882959) DBG | domain pause-882959 has defined IP address 192.168.61.161 and MAC address 52:54:00:8d:67:41 in network mk-pause-882959
	I0717 19:43:10.653852 1092475 main.go:141] libmachine: (pause-882959) Calling .GetSSHPort
	I0717 19:43:10.654107 1092475 main.go:141] libmachine: (pause-882959) Calling .GetSSHKeyPath
	I0717 19:43:10.654281 1092475 main.go:141] libmachine: (pause-882959) Calling .GetSSHUsername
	I0717 19:43:10.654430 1092475 sshutil.go:53] new ssh client: &{IP:192.168.61.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/pause-882959/id_rsa Username:docker}
	I0717 19:43:10.747373 1092475 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:43:10.752012 1092475 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 19:43:10.752050 1092475 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/addons for local assets ...
	I0717 19:43:10.752123 1092475 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/files for local assets ...
	I0717 19:43:10.752219 1092475 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> 10689542.pem in /etc/ssl/certs
	I0717 19:43:10.752353 1092475 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:43:10.760863 1092475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:43:10.784720 1092475 start.go:303] post-start completed in 685.385047ms
	I0717 19:43:10.784754 1092475 fix.go:56] fixHost completed within 10.104823769s
	I0717 19:43:10.784784 1092475 main.go:141] libmachine: (pause-882959) Calling .GetSSHHostname
	I0717 19:43:10.787800 1092475 main.go:141] libmachine: (pause-882959) DBG | domain pause-882959 has defined MAC address 52:54:00:8d:67:41 in network mk-pause-882959
	I0717 19:43:10.788338 1092475 main.go:141] libmachine: (pause-882959) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:67:41", ip: ""} in network mk-pause-882959: {Iface:virbr3 ExpiryTime:2023-07-17 20:42:08 +0000 UTC Type:0 Mac:52:54:00:8d:67:41 Iaid: IPaddr:192.168.61.161 Prefix:24 Hostname:pause-882959 Clientid:01:52:54:00:8d:67:41}
	I0717 19:43:10.788372 1092475 main.go:141] libmachine: (pause-882959) DBG | domain pause-882959 has defined IP address 192.168.61.161 and MAC address 52:54:00:8d:67:41 in network mk-pause-882959
	I0717 19:43:10.788632 1092475 main.go:141] libmachine: (pause-882959) Calling .GetSSHPort
	I0717 19:43:10.788862 1092475 main.go:141] libmachine: (pause-882959) Calling .GetSSHKeyPath
	I0717 19:43:10.789011 1092475 main.go:141] libmachine: (pause-882959) Calling .GetSSHKeyPath
	I0717 19:43:10.789137 1092475 main.go:141] libmachine: (pause-882959) Calling .GetSSHUsername
	I0717 19:43:10.789290 1092475 main.go:141] libmachine: Using SSH client type: native
	I0717 19:43:10.789796 1092475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.161 22 <nil> <nil>}
	I0717 19:43:10.789811 1092475 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 19:43:10.924073 1092475 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689622990.920741917
	
	I0717 19:43:10.924102 1092475 fix.go:206] guest clock: 1689622990.920741917
	I0717 19:43:10.924114 1092475 fix.go:219] Guest: 2023-07-17 19:43:10.920741917 +0000 UTC Remote: 2023-07-17 19:43:10.784759408 +0000 UTC m=+10.280332000 (delta=135.982509ms)
	I0717 19:43:10.924169 1092475 fix.go:190] guest clock delta is within tolerance: 135.982509ms
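
The guest-clock check above runs `date +%s.%N` over SSH and compares the parsed result with the host's clock (here the delta is 135.982509ms, well inside tolerance). The following is a minimal, self-contained Go sketch of that comparison; it is illustrative only, not minikube's fix.go implementation, and the sample timestamp is simply the value from the log above.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the output of `date +%s.%N` into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1689622990.920741917") // value seen in the log above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %v\n", delta)
}
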
	I0717 19:43:10.924177 1092475 start.go:83] releasing machines lock for "pause-882959", held for 10.244265574s
	I0717 19:43:10.924213 1092475 main.go:141] libmachine: (pause-882959) Calling .DriverName
	I0717 19:43:10.924524 1092475 main.go:141] libmachine: (pause-882959) Calling .GetIP
	I0717 19:43:10.928621 1092475 main.go:141] libmachine: (pause-882959) DBG | domain pause-882959 has defined MAC address 52:54:00:8d:67:41 in network mk-pause-882959
	I0717 19:43:10.929105 1092475 main.go:141] libmachine: (pause-882959) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:67:41", ip: ""} in network mk-pause-882959: {Iface:virbr3 ExpiryTime:2023-07-17 20:42:08 +0000 UTC Type:0 Mac:52:54:00:8d:67:41 Iaid: IPaddr:192.168.61.161 Prefix:24 Hostname:pause-882959 Clientid:01:52:54:00:8d:67:41}
	I0717 19:43:10.929132 1092475 main.go:141] libmachine: (pause-882959) DBG | domain pause-882959 has defined IP address 192.168.61.161 and MAC address 52:54:00:8d:67:41 in network mk-pause-882959
	I0717 19:43:10.929396 1092475 main.go:141] libmachine: (pause-882959) Calling .DriverName
	I0717 19:43:10.930415 1092475 main.go:141] libmachine: (pause-882959) Calling .DriverName
	I0717 19:43:10.930648 1092475 main.go:141] libmachine: (pause-882959) Calling .DriverName
	I0717 19:43:10.930755 1092475 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:43:10.930815 1092475 main.go:141] libmachine: (pause-882959) Calling .GetSSHHostname
	I0717 19:43:10.930948 1092475 ssh_runner.go:195] Run: cat /version.json
	I0717 19:43:10.930984 1092475 main.go:141] libmachine: (pause-882959) Calling .GetSSHHostname
	I0717 19:43:10.935639 1092475 main.go:141] libmachine: (pause-882959) DBG | domain pause-882959 has defined MAC address 52:54:00:8d:67:41 in network mk-pause-882959
	I0717 19:43:10.935902 1092475 main.go:141] libmachine: (pause-882959) DBG | domain pause-882959 has defined MAC address 52:54:00:8d:67:41 in network mk-pause-882959
	I0717 19:43:10.936216 1092475 main.go:141] libmachine: (pause-882959) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:67:41", ip: ""} in network mk-pause-882959: {Iface:virbr3 ExpiryTime:2023-07-17 20:42:08 +0000 UTC Type:0 Mac:52:54:00:8d:67:41 Iaid: IPaddr:192.168.61.161 Prefix:24 Hostname:pause-882959 Clientid:01:52:54:00:8d:67:41}
	I0717 19:43:10.936286 1092475 main.go:141] libmachine: (pause-882959) DBG | domain pause-882959 has defined IP address 192.168.61.161 and MAC address 52:54:00:8d:67:41 in network mk-pause-882959
	I0717 19:43:10.936783 1092475 main.go:141] libmachine: (pause-882959) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:67:41", ip: ""} in network mk-pause-882959: {Iface:virbr3 ExpiryTime:2023-07-17 20:42:08 +0000 UTC Type:0 Mac:52:54:00:8d:67:41 Iaid: IPaddr:192.168.61.161 Prefix:24 Hostname:pause-882959 Clientid:01:52:54:00:8d:67:41}
	I0717 19:43:10.936823 1092475 main.go:141] libmachine: (pause-882959) DBG | domain pause-882959 has defined IP address 192.168.61.161 and MAC address 52:54:00:8d:67:41 in network mk-pause-882959
	I0717 19:43:10.936828 1092475 main.go:141] libmachine: (pause-882959) Calling .GetSSHPort
	I0717 19:43:10.936998 1092475 main.go:141] libmachine: (pause-882959) Calling .GetSSHPort
	I0717 19:43:10.937095 1092475 main.go:141] libmachine: (pause-882959) Calling .GetSSHKeyPath
	I0717 19:43:10.937288 1092475 main.go:141] libmachine: (pause-882959) Calling .GetSSHUsername
	I0717 19:43:10.937294 1092475 main.go:141] libmachine: (pause-882959) Calling .GetSSHKeyPath
	I0717 19:43:10.937506 1092475 sshutil.go:53] new ssh client: &{IP:192.168.61.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/pause-882959/id_rsa Username:docker}
	I0717 19:43:10.938225 1092475 main.go:141] libmachine: (pause-882959) Calling .GetSSHUsername
	I0717 19:43:10.938448 1092475 sshutil.go:53] new ssh client: &{IP:192.168.61.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/pause-882959/id_rsa Username:docker}
	W0717 19:43:11.065611 1092475 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 19:43:11.065720 1092475 ssh_runner.go:195] Run: systemctl --version
	I0717 19:43:11.084461 1092475 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:43:11.273216 1092475 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:43:11.282941 1092475 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:43:11.283039 1092475 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:43:11.296518 1092475 cni.go:265] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 19:43:11.296555 1092475 start.go:469] detecting cgroup driver to use...
	I0717 19:43:11.296665 1092475 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:43:11.322361 1092475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:43:11.344089 1092475 docker.go:196] disabling cri-docker service (if available) ...
	I0717 19:43:11.344162 1092475 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:43:11.361627 1092475 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:43:11.378655 1092475 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:43:11.574548 1092475 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:43:11.737607 1092475 docker.go:212] disabling docker service ...
	I0717 19:43:11.737717 1092475 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:43:11.755529 1092475 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:43:11.772626 1092475 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:43:12.535853 1092475 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:43:12.887365 1092475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:43:12.915921 1092475 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:43:12.961122 1092475 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:43:12.961212 1092475 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:43:12.992802 1092475 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:43:12.992891 1092475 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:43:13.011381 1092475 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:43:13.033838 1092475 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:43:13.054148 1092475 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:43:13.072564 1092475 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:43:13.090442 1092475 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:43:13.124208 1092475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:43:13.456763 1092475 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:43:15.360960 1092475 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.904089409s)
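
The three `sed -i` commands above rewrite the pause image, cgroup manager and conmon cgroup in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. As a rough illustration only (not minikube's crio.go code), the same line-oriented rewrite can be expressed in Go with multi-line regular expressions applied to the drop-in file's contents:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// A stand-in for the existing drop-in config on the guest.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.6"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Equivalent of deleting any conmon_cgroup line and re-adding it after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}
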
	I0717 19:43:15.361005 1092475 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:43:15.361066 1092475 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:43:15.368584 1092475 start.go:537] Will wait 60s for crictl version
	I0717 19:43:15.368663 1092475 ssh_runner.go:195] Run: which crictl
	I0717 19:43:15.373486 1092475 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:43:15.414454 1092475 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 19:43:15.414571 1092475 ssh_runner.go:195] Run: crio --version
	I0717 19:43:15.482324 1092475 ssh_runner.go:195] Run: crio --version
	I0717 19:43:15.549964 1092475 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0717 19:43:15.551366 1092475 main.go:141] libmachine: (pause-882959) Calling .GetIP
	I0717 19:43:15.554584 1092475 main.go:141] libmachine: (pause-882959) DBG | domain pause-882959 has defined MAC address 52:54:00:8d:67:41 in network mk-pause-882959
	I0717 19:43:15.555041 1092475 main.go:141] libmachine: (pause-882959) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:67:41", ip: ""} in network mk-pause-882959: {Iface:virbr3 ExpiryTime:2023-07-17 20:42:08 +0000 UTC Type:0 Mac:52:54:00:8d:67:41 Iaid: IPaddr:192.168.61.161 Prefix:24 Hostname:pause-882959 Clientid:01:52:54:00:8d:67:41}
	I0717 19:43:15.555076 1092475 main.go:141] libmachine: (pause-882959) DBG | domain pause-882959 has defined IP address 192.168.61.161 and MAC address 52:54:00:8d:67:41 in network mk-pause-882959
	I0717 19:43:15.555603 1092475 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0717 19:43:15.560855 1092475 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 19:43:15.560925 1092475 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:43:15.628206 1092475 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 19:43:15.628266 1092475 crio.go:415] Images already preloaded, skipping extraction
	I0717 19:43:15.628336 1092475 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:43:15.681457 1092475 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 19:43:15.681483 1092475 cache_images.go:84] Images are preloaded, skipping loading
	I0717 19:43:15.681594 1092475 ssh_runner.go:195] Run: crio config
	I0717 19:43:15.778535 1092475 cni.go:84] Creating CNI manager for ""
	I0717 19:43:15.778568 1092475 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:43:15.778602 1092475 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 19:43:15.778628 1092475 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.161 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-882959 NodeName:pause-882959 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.161"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.161 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:43:15.778843 1092475 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.161
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-882959"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.161
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.161"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:43:15.778965 1092475 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=pause-882959 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.161
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:pause-882959 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
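
Everything from the `kubeadm config:` block down to the kubelet ExecStart line above is rendered from the options struct logged at kubeadm.go:176. The snippet below is a minimal sketch of that kind of rendering with text/template, using a handful of the fields shown in the log; it is not minikube's actual template and the type and field names are invented for illustration.

package main

import (
	"os"
	"text/template"
)

// kubeletOpts holds a few of the fields that feed the generated config.
type kubeletOpts struct {
	CgroupDriver  string
	ClusterDomain string
	PodCIDR       string
	StaticPodPath string
}

const kubeletTmpl = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: {{.CgroupDriver}}
clusterDomain: "{{.ClusterDomain}}"
staticPodPath: {{.StaticPodPath}}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "{{.PodCIDR}}"
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletTmpl))
	if err := t.Execute(os.Stdout, kubeletOpts{
		CgroupDriver:  "cgroupfs",
		ClusterDomain: "cluster.local",
		PodCIDR:       "10.244.0.0/16",
		StaticPodPath: "/etc/kubernetes/manifests",
	}); err != nil {
		panic(err)
	}
}
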
	I0717 19:43:15.779053 1092475 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 19:43:15.790725 1092475 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:43:15.790832 1092475 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:43:15.803151 1092475 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I0717 19:43:15.825652 1092475 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:43:15.849832 1092475 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0717 19:43:15.872905 1092475 ssh_runner.go:195] Run: grep 192.168.61.161	control-plane.minikube.internal$ /etc/hosts
	I0717 19:43:15.879270 1092475 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/pause-882959 for IP: 192.168.61.161
	I0717 19:43:15.879308 1092475 certs.go:190] acquiring lock for shared ca certs: {Name:mkdfbc203d7bd874aff2f1c4e5c3352251804e32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:43:15.879487 1092475 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key
	I0717 19:43:15.879549 1092475 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key
	I0717 19:43:15.879644 1092475 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/pause-882959/client.key
	I0717 19:43:15.879717 1092475 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/pause-882959/apiserver.key.0d8cba74
	I0717 19:43:15.879772 1092475 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/pause-882959/proxy-client.key
	I0717 19:43:15.879931 1092475 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem (1338 bytes)
	W0717 19:43:15.879968 1092475 certs.go:433] ignoring /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954_empty.pem, impossibly tiny 0 bytes
	I0717 19:43:15.879980 1092475 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:43:15.880014 1092475 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem (1082 bytes)
	I0717 19:43:15.880046 1092475 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:43:15.880085 1092475 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem (1675 bytes)
	I0717 19:43:15.880153 1092475 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:43:15.881793 1092475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/pause-882959/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 19:43:15.918233 1092475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/pause-882959/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 19:43:15.951883 1092475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/pause-882959/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:43:15.982870 1092475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/pause-882959/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 19:43:16.013574 1092475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:43:16.043658 1092475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 19:43:16.073898 1092475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:43:16.105587 1092475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 19:43:16.139761 1092475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:43:16.173423 1092475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem --> /usr/share/ca-certificates/1068954.pem (1338 bytes)
	I0717 19:43:16.204054 1092475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /usr/share/ca-certificates/10689542.pem (1708 bytes)
	I0717 19:43:16.237329 1092475 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:43:16.259587 1092475 ssh_runner.go:195] Run: openssl version
	I0717 19:43:16.266582 1092475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10689542.pem && ln -fs /usr/share/ca-certificates/10689542.pem /etc/ssl/certs/10689542.pem"
	I0717 19:43:16.278454 1092475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10689542.pem
	I0717 19:43:16.284963 1092475 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:52 /usr/share/ca-certificates/10689542.pem
	I0717 19:43:16.285115 1092475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10689542.pem
	I0717 19:43:16.291846 1092475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10689542.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:43:16.301641 1092475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:43:16.312993 1092475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:43:16.319766 1092475 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:43:16.319842 1092475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:43:16.327716 1092475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:43:16.340906 1092475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1068954.pem && ln -fs /usr/share/ca-certificates/1068954.pem /etc/ssl/certs/1068954.pem"
	I0717 19:43:16.353372 1092475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1068954.pem
	I0717 19:43:16.359398 1092475 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:52 /usr/share/ca-certificates/1068954.pem
	I0717 19:43:16.359471 1092475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1068954.pem
	I0717 19:43:16.366199 1092475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1068954.pem /etc/ssl/certs/51391683.0"
	I0717 19:43:16.376947 1092475 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 19:43:16.382595 1092475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:43:16.389860 1092475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:43:16.396744 1092475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:43:16.403696 1092475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:43:16.412156 1092475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:43:16.418560 1092475 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
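
The six `openssl x509 ... -checkend 86400` runs above verify that none of the control-plane certificates expires within the next 24 hours. A Go equivalent of one such check is sketched below; it is illustrative only (minikube runs the openssl binary on the guest, not this code).

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports whether the PEM certificate at path expires within d.
func checkend(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: checkend <cert.pem>")
		os.Exit(2)
	}
	expiring, err := checkend(os.Args[1], 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	if expiring {
		fmt.Println("Certificate will expire") // same semantics as openssl -checkend
		os.Exit(1)
	}
	fmt.Println("Certificate will not expire")
}
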
	I0717 19:43:16.425223 1092475 kubeadm.go:404] StartCluster: {Name:pause-882959 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:pause-882959 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.161 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:43:16.425381 1092475 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:43:16.425448 1092475 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:43:16.464483 1092475 cri.go:89] found id: "49baf58ec15908cae47539fcabe15ec5881d91984f515fc8893d1e5ba4fa945a"
	I0717 19:43:16.464517 1092475 cri.go:89] found id: "e27349b47eb29995d13e036904ad2489fbdd0158ab94dd5af41f5b54a54b23b4"
	I0717 19:43:16.464524 1092475 cri.go:89] found id: "3cd806479e841d0c7ce2d834a8662b906ce551fbe29c2d6a254748e6426d13ab"
	I0717 19:43:16.464530 1092475 cri.go:89] found id: "dd136db1eb6034e459ff67b5521d6d3b4cdfed3e633f6c912bd853160d4d89af"
	I0717 19:43:16.464534 1092475 cri.go:89] found id: "303dedd41efda4df68fc27ba33a296176fe13089eccec1d1c7c7e0acba2026f4"
	I0717 19:43:16.464539 1092475 cri.go:89] found id: "4c073da8a04365a46710029851ee407893c131ec9a0206f3896ff2e8d7e32023"
	I0717 19:43:16.464544 1092475 cri.go:89] found id: "5e89c055cbdef19b4c97a2491a9b4f8f0543b60a2ec2615f5dceb70f936d7ab9"
	I0717 19:43:16.464548 1092475 cri.go:89] found id: "3d8a12439304a988c8b9984070098ba9b7d9ba82fd3fab8eab6b615a9de29c2a"
	I0717 19:43:16.464554 1092475 cri.go:89] found id: ""
	I0717 19:43:16.464617 1092475 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-882959 -n pause-882959
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-882959 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-882959 logs -n 25: (2.157547171s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p multinode-464644-m03        | multinode-464644-m03      | jenkins | v1.30.1 | 17 Jul 23 19:34 UTC | 17 Jul 23 19:34 UTC |
	| delete  | -p multinode-464644            | multinode-464644          | jenkins | v1.30.1 | 17 Jul 23 19:34 UTC | 17 Jul 23 19:34 UTC |
	| start   | -p test-preload-585582         | test-preload-585582       | jenkins | v1.30.1 | 17 Jul 23 19:34 UTC | 17 Jul 23 19:37 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true  |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2  |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4   |                           |         |         |                     |                     |
	| image   | test-preload-585582 image pull | test-preload-585582       | jenkins | v1.30.1 | 17 Jul 23 19:37 UTC | 17 Jul 23 19:37 UTC |
	|         | gcr.io/k8s-minikube/busybox    |                           |         |         |                     |                     |
	| stop    | -p test-preload-585582         | test-preload-585582       | jenkins | v1.30.1 | 17 Jul 23 19:37 UTC |                     |
	| delete  | -p test-preload-585582         | test-preload-585582       | jenkins | v1.30.1 | 17 Jul 23 19:39 UTC | 17 Jul 23 19:39 UTC |
	| start   | -p scheduled-stop-119343       | scheduled-stop-119343     | jenkins | v1.30.1 | 17 Jul 23 19:39 UTC | 17 Jul 23 19:40 UTC |
	|         | --memory=2048 --driver=kvm2    |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-119343       | scheduled-stop-119343     | jenkins | v1.30.1 | 17 Jul 23 19:40 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-119343       | scheduled-stop-119343     | jenkins | v1.30.1 | 17 Jul 23 19:40 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-119343       | scheduled-stop-119343     | jenkins | v1.30.1 | 17 Jul 23 19:40 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-119343       | scheduled-stop-119343     | jenkins | v1.30.1 | 17 Jul 23 19:40 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-119343       | scheduled-stop-119343     | jenkins | v1.30.1 | 17 Jul 23 19:40 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-119343       | scheduled-stop-119343     | jenkins | v1.30.1 | 17 Jul 23 19:40 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-119343       | scheduled-stop-119343     | jenkins | v1.30.1 | 17 Jul 23 19:40 UTC | 17 Jul 23 19:40 UTC |
	|         | --cancel-scheduled             |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-119343       | scheduled-stop-119343     | jenkins | v1.30.1 | 17 Jul 23 19:40 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-119343       | scheduled-stop-119343     | jenkins | v1.30.1 | 17 Jul 23 19:40 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-119343       | scheduled-stop-119343     | jenkins | v1.30.1 | 17 Jul 23 19:40 UTC | 17 Jul 23 19:41 UTC |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| delete  | -p scheduled-stop-119343       | scheduled-stop-119343     | jenkins | v1.30.1 | 17 Jul 23 19:41 UTC | 17 Jul 23 19:41 UTC |
	| start   | -p pause-882959 --memory=2048  | pause-882959              | jenkins | v1.30.1 | 17 Jul 23 19:41 UTC | 17 Jul 23 19:43 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p offline-crio-814891         | offline-crio-814891       | jenkins | v1.30.1 | 17 Jul 23 19:41 UTC | 17 Jul 23 19:43 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --memory=2048             |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-852374   | kubernetes-upgrade-852374 | jenkins | v1.30.1 | 17 Jul 23 19:41 UTC | 17 Jul 23 19:43 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p pause-882959                | pause-882959              | jenkins | v1.30.1 | 17 Jul 23 19:43 UTC | 17 Jul 23 19:44 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p offline-crio-814891         | offline-crio-814891       | jenkins | v1.30.1 | 17 Jul 23 19:43 UTC | 17 Jul 23 19:43 UTC |
	| stop    | -p kubernetes-upgrade-852374   | kubernetes-upgrade-852374 | jenkins | v1.30.1 | 17 Jul 23 19:43 UTC | 17 Jul 23 19:43 UTC |
	| start   | -p kubernetes-upgrade-852374   | kubernetes-upgrade-852374 | jenkins | v1.30.1 | 17 Jul 23 19:43 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 19:43:33
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 19:43:33.884417 1092821 out.go:296] Setting OutFile to fd 1 ...
	I0717 19:43:33.884611 1092821 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:43:33.884625 1092821 out.go:309] Setting ErrFile to fd 2...
	I0717 19:43:33.884631 1092821 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:43:33.884973 1092821 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1061725/.minikube/bin
	I0717 19:43:33.885851 1092821 out.go:303] Setting JSON to false
	I0717 19:43:33.887271 1092821 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":15965,"bootTime":1689607049,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 19:43:33.887372 1092821 start.go:138] virtualization: kvm guest
	I0717 19:43:33.890836 1092821 out.go:177] * [kubernetes-upgrade-852374] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 19:43:33.893319 1092821 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 19:43:33.893349 1092821 notify.go:220] Checking for updates...
	I0717 19:43:33.895536 1092821 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:43:33.898241 1092821 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 19:43:33.900244 1092821 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1061725/.minikube
	I0717 19:43:33.901946 1092821 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 19:43:33.903989 1092821 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 19:43:33.906296 1092821 config.go:182] Loaded profile config "kubernetes-upgrade-852374": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0717 19:43:33.906660 1092821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:43:33.906746 1092821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:43:33.928006 1092821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37147
	I0717 19:43:33.928651 1092821 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:43:33.929427 1092821 main.go:141] libmachine: Using API Version  1
	I0717 19:43:33.929454 1092821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:43:33.929862 1092821 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:43:33.930101 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .DriverName
	I0717 19:43:33.930420 1092821 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 19:43:33.930893 1092821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:43:33.930942 1092821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:43:33.950780 1092821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39727
	I0717 19:43:33.951299 1092821 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:43:33.951929 1092821 main.go:141] libmachine: Using API Version  1
	I0717 19:43:33.951953 1092821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:43:33.952306 1092821 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:43:33.952507 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .DriverName
	I0717 19:43:33.995902 1092821 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 19:43:33.997723 1092821 start.go:298] selected driver: kvm2
	I0717 19:43:33.997748 1092821 start.go:880] validating driver "kvm2" against &{Name:kubernetes-upgrade-852374 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-852374 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:43:33.997911 1092821 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 19:43:33.998934 1092821 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:43:33.999054 1092821 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16890-1061725/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 19:43:34.015977 1092821 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.30.1
	I0717 19:43:34.016501 1092821 cni.go:84] Creating CNI manager for ""
	I0717 19:43:34.016532 1092821 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:43:34.016542 1092821 start_flags.go:319] config:
	{Name:kubernetes-upgrade-852374 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kubernetes-upgrade-852374 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:43:34.016738 1092821 iso.go:125] acquiring lock: {Name:mk4c08fdde891ec9d41b144ca36ec57b2da32175 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:43:34.019408 1092821 out.go:177] * Starting control plane node kubernetes-upgrade-852374 in cluster kubernetes-upgrade-852374
	I0717 19:43:34.022110 1092821 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 19:43:34.022193 1092821 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0717 19:43:34.022208 1092821 cache.go:57] Caching tarball of preloaded images
	I0717 19:43:34.022315 1092821 preload.go:174] Found /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 19:43:34.022330 1092821 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 19:43:34.022512 1092821 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/kubernetes-upgrade-852374/config.json ...
	I0717 19:43:34.022792 1092821 start.go:365] acquiring machines lock for kubernetes-upgrade-852374: {Name:mk1fbc5d19c9a63796b02883911e39f1c406ff89 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:43:34.022857 1092821 start.go:369] acquired machines lock for "kubernetes-upgrade-852374" in 37.825µs
	I0717 19:43:34.022979 1092821 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:43:34.023002 1092821 fix.go:54] fixHost starting: 
	I0717 19:43:34.023505 1092821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:43:34.023558 1092821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:43:34.039422 1092821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40545
	I0717 19:43:34.039975 1092821 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:43:34.040640 1092821 main.go:141] libmachine: Using API Version  1
	I0717 19:43:34.040668 1092821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:43:34.041046 1092821 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:43:34.041307 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .DriverName
	I0717 19:43:34.041474 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetState
	I0717 19:43:34.043498 1092821 fix.go:102] recreateIfNeeded on kubernetes-upgrade-852374: state=Stopped err=<nil>
	I0717 19:43:34.043538 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .DriverName
	W0717 19:43:34.043721 1092821 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 19:43:34.046452 1092821 out.go:177] * Restarting existing kvm2 VM for "kubernetes-upgrade-852374" ...
	I0717 19:43:32.472654 1092475 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 b1d1ae4fbdbc1163966bde2d686c11c1507edff36bf2532346f67a91f5ffccb3 aea020bea67e6847143435bd85b774e1f7ae721b240ead0b53db406afbadd92d 9e713b911a483c22e00dcb0423f7a498c8a1e2d0cb49f052b7f7ea017d524c14 c720ae6c03731bdea255e8770e2f31e560717e6d78de0ed00b78066c89a21652 7dd7d620012bded34deeef2ec3386cb9036ac0ed8255e276646abb38d6ec371c bf4ef96d6cd5c498572f61037aad7de89db67f22ada0211a0eede41c89e02832 49baf58ec15908cae47539fcabe15ec5881d91984f515fc8893d1e5ba4fa945a e27349b47eb29995d13e036904ad2489fbdd0158ab94dd5af41f5b54a54b23b4 3cd806479e841d0c7ce2d834a8662b906ce551fbe29c2d6a254748e6426d13ab dd136db1eb6034e459ff67b5521d6d3b4cdfed3e633f6c912bd853160d4d89af 303dedd41efda4df68fc27ba33a296176fe13089eccec1d1c7c7e0acba2026f4 4c073da8a04365a46710029851ee407893c131ec9a0206f3896ff2e8d7e32023 5e89c055cbdef19b4c97a2491a9b4f8f0543b60a2ec2615f5dceb70f936d7ab9: (5.85764714s)
	W0717 19:43:32.472751 1092475 kubeadm.go:689] Failed to stop kube-system containers: port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 b1d1ae4fbdbc1163966bde2d686c11c1507edff36bf2532346f67a91f5ffccb3 aea020bea67e6847143435bd85b774e1f7ae721b240ead0b53db406afbadd92d 9e713b911a483c22e00dcb0423f7a498c8a1e2d0cb49f052b7f7ea017d524c14 c720ae6c03731bdea255e8770e2f31e560717e6d78de0ed00b78066c89a21652 7dd7d620012bded34deeef2ec3386cb9036ac0ed8255e276646abb38d6ec371c bf4ef96d6cd5c498572f61037aad7de89db67f22ada0211a0eede41c89e02832 49baf58ec15908cae47539fcabe15ec5881d91984f515fc8893d1e5ba4fa945a e27349b47eb29995d13e036904ad2489fbdd0158ab94dd5af41f5b54a54b23b4 3cd806479e841d0c7ce2d834a8662b906ce551fbe29c2d6a254748e6426d13ab dd136db1eb6034e459ff67b5521d6d3b4cdfed3e633f6c912bd853160d4d89af 303dedd41efda4df68fc27ba33a296176fe13089eccec1d1c7c7e0acba2026f4 4c073da8a04365a46710029851ee407893c131ec9a0206f3896ff2e8d7e32023 5e89c055cbdef19b4c97a2491a9b4f8f0543b60a2ec2615f5dceb70f936d7ab9: Process exited with status 1
	stdout:
	b1d1ae4fbdbc1163966bde2d686c11c1507edff36bf2532346f67a91f5ffccb3
	aea020bea67e6847143435bd85b774e1f7ae721b240ead0b53db406afbadd92d
	9e713b911a483c22e00dcb0423f7a498c8a1e2d0cb49f052b7f7ea017d524c14
	c720ae6c03731bdea255e8770e2f31e560717e6d78de0ed00b78066c89a21652
	7dd7d620012bded34deeef2ec3386cb9036ac0ed8255e276646abb38d6ec371c
	bf4ef96d6cd5c498572f61037aad7de89db67f22ada0211a0eede41c89e02832
	
	stderr:
	time="2023-07-17T19:43:32Z" level=fatal msg="stopping the container \"49baf58ec15908cae47539fcabe15ec5881d91984f515fc8893d1e5ba4fa945a\": rpc error: code = NotFound desc = could not find container \"49baf58ec15908cae47539fcabe15ec5881d91984f515fc8893d1e5ba4fa945a\": container with ID starting with 49baf58ec15908cae47539fcabe15ec5881d91984f515fc8893d1e5ba4fa945a not found: ID does not exist"
	I0717 19:43:32.472817 1092475 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:43:32.518774 1092475 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:43:32.531325 1092475 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jul 17 19:42 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Jul 17 19:42 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Jul 17 19:42 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Jul 17 19:42 /etc/kubernetes/scheduler.conf
	
	I0717 19:43:32.531416 1092475 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:43:32.544424 1092475 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:43:32.557755 1092475 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:43:32.570144 1092475 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:43:32.570297 1092475 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:43:32.580106 1092475 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:43:32.589431 1092475 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:43:32.589513 1092475 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:43:32.599445 1092475 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:43:32.609170 1092475 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 19:43:32.609209 1092475 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:43:32.681988 1092475 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:43:33.858988 1092475 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.176961226s)
	I0717 19:43:33.859017 1092475 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:43:34.156902 1092475 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:43:34.278216 1092475 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
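	[editor's note: the five ssh_runner commands above replay individual kubeadm "init phase" steps (certs, kubeconfig, kubelet-start, control-plane, etcd) against the existing kubeadm.yaml. The snippet below is only an illustrative Go sketch of that same phase ordering using os/exec; it is not minikube's kubeadm.go, and the binary path and config path are taken from the log, not from minikube source.]

	// Illustrative only: replays the kubeadm init phases in the order the log shows.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
			out, err := exec.Command("kubeadm", args...).CombinedOutput()
			fmt.Printf("kubeadm %v\n%s\n", p, out)
			if err != nil {
				fmt.Println("phase failed:", err) // later phases depend on earlier ones
				return
			}
		}
	}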
	I0717 19:43:34.573068 1092475 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:43:34.573159 1092475 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:43:35.138818 1092475 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:43:34.048386 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .Start
	I0717 19:43:34.048775 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Ensuring networks are active...
	I0717 19:43:34.049942 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Ensuring network default is active
	I0717 19:43:34.050471 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Ensuring network mk-kubernetes-upgrade-852374 is active
	I0717 19:43:34.050867 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Getting domain xml...
	I0717 19:43:34.051675 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Creating domain...
	I0717 19:43:35.531075 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Waiting to get IP...
	I0717 19:43:35.532528 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:35.533091 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | unable to find current IP address of domain kubernetes-upgrade-852374 in network mk-kubernetes-upgrade-852374
	I0717 19:43:35.533381 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | I0717 19:43:35.533250 1092866 retry.go:31] will retry after 235.204056ms: waiting for machine to come up
	I0717 19:43:35.769906 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:35.770401 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | unable to find current IP address of domain kubernetes-upgrade-852374 in network mk-kubernetes-upgrade-852374
	I0717 19:43:35.770436 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | I0717 19:43:35.770363 1092866 retry.go:31] will retry after 354.76412ms: waiting for machine to come up
	I0717 19:43:36.127320 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:36.127912 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | unable to find current IP address of domain kubernetes-upgrade-852374 in network mk-kubernetes-upgrade-852374
	I0717 19:43:36.128014 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | I0717 19:43:36.127973 1092866 retry.go:31] will retry after 442.510798ms: waiting for machine to come up
	I0717 19:43:36.572712 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:36.573501 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | unable to find current IP address of domain kubernetes-upgrade-852374 in network mk-kubernetes-upgrade-852374
	I0717 19:43:36.573553 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | I0717 19:43:36.573408 1092866 retry.go:31] will retry after 578.01041ms: waiting for machine to come up
	I0717 19:43:37.153768 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:37.154597 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | unable to find current IP address of domain kubernetes-upgrade-852374 in network mk-kubernetes-upgrade-852374
	I0717 19:43:37.154789 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | I0717 19:43:37.154697 1092866 retry.go:31] will retry after 661.501272ms: waiting for machine to come up
	I0717 19:43:37.817547 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:37.818326 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | unable to find current IP address of domain kubernetes-upgrade-852374 in network mk-kubernetes-upgrade-852374
	I0717 19:43:37.818356 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | I0717 19:43:37.818260 1092866 retry.go:31] will retry after 890.152289ms: waiting for machine to come up
	I0717 19:43:38.710117 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:38.710601 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | unable to find current IP address of domain kubernetes-upgrade-852374 in network mk-kubernetes-upgrade-852374
	I0717 19:43:38.710633 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | I0717 19:43:38.710499 1092866 retry.go:31] will retry after 1.066533307s: waiting for machine to come up
	I0717 19:43:35.638722 1092475 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:43:36.138806 1092475 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:43:36.638971 1092475 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:43:36.679553 1092475 api_server.go:72] duration metric: took 2.106485099s to wait for apiserver process to appear ...
	I0717 19:43:36.679586 1092475 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:43:36.679610 1092475 api_server.go:253] Checking apiserver healthz at https://192.168.61.161:8443/healthz ...
	I0717 19:43:36.680344 1092475 api_server.go:269] stopped: https://192.168.61.161:8443/healthz: Get "https://192.168.61.161:8443/healthz": dial tcp 192.168.61.161:8443: connect: connection refused
	I0717 19:43:37.180936 1092475 api_server.go:253] Checking apiserver healthz at https://192.168.61.161:8443/healthz ...
	I0717 19:43:39.778814 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:39.779311 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | unable to find current IP address of domain kubernetes-upgrade-852374 in network mk-kubernetes-upgrade-852374
	I0717 19:43:39.779342 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | I0717 19:43:39.779272 1092866 retry.go:31] will retry after 925.553521ms: waiting for machine to come up
	I0717 19:43:40.706910 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:40.707436 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | unable to find current IP address of domain kubernetes-upgrade-852374 in network mk-kubernetes-upgrade-852374
	I0717 19:43:40.707457 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | I0717 19:43:40.707384 1092866 retry.go:31] will retry after 1.463072592s: waiting for machine to come up
	I0717 19:43:42.172003 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:42.172463 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | unable to find current IP address of domain kubernetes-upgrade-852374 in network mk-kubernetes-upgrade-852374
	I0717 19:43:42.172493 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | I0717 19:43:42.172376 1092866 retry.go:31] will retry after 1.893228748s: waiting for machine to come up
	I0717 19:43:42.181609 1092475 api_server.go:269] stopped: https://192.168.61.161:8443/healthz: Get "https://192.168.61.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 19:43:42.181662 1092475 api_server.go:253] Checking apiserver healthz at https://192.168.61.161:8443/healthz ...
	I0717 19:43:42.195579 1092475 api_server.go:279] https://192.168.61.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:43:42.195627 1092475 api_server.go:103] status: https://192.168.61.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:43:42.195643 1092475 api_server.go:253] Checking apiserver healthz at https://192.168.61.161:8443/healthz ...
	I0717 19:43:42.206747 1092475 api_server.go:279] https://192.168.61.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:43:42.206787 1092475 api_server.go:103] status: https://192.168.61.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:43:42.681383 1092475 api_server.go:253] Checking apiserver healthz at https://192.168.61.161:8443/healthz ...
	I0717 19:43:42.690794 1092475 api_server.go:279] https://192.168.61.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:43:42.690839 1092475 api_server.go:103] status: https://192.168.61.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:43:43.181401 1092475 api_server.go:253] Checking apiserver healthz at https://192.168.61.161:8443/healthz ...
	I0717 19:43:43.434093 1092475 api_server.go:279] https://192.168.61.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:43:43.434148 1092475 api_server.go:103] status: https://192.168.61.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:43:43.680501 1092475 api_server.go:253] Checking apiserver healthz at https://192.168.61.161:8443/healthz ...
	I0717 19:43:43.708077 1092475 api_server.go:279] https://192.168.61.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:43:43.708119 1092475 api_server.go:103] status: https://192.168.61.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:43:44.180578 1092475 api_server.go:253] Checking apiserver healthz at https://192.168.61.161:8443/healthz ...
	I0717 19:43:44.189711 1092475 api_server.go:279] https://192.168.61.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:43:44.189756 1092475 api_server.go:103] status: https://192.168.61.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:43:44.680989 1092475 api_server.go:253] Checking apiserver healthz at https://192.168.61.161:8443/healthz ...
	I0717 19:43:44.694918 1092475 api_server.go:279] https://192.168.61.161:8443/healthz returned 200:
	ok
	I0717 19:43:44.720512 1092475 api_server.go:141] control plane version: v1.27.3
	I0717 19:43:44.720608 1092475 api_server.go:131] duration metric: took 8.041013212s to wait for apiserver health ...
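	[editor's note: the repeated 500 responses above are the apiserver's /healthz poststarthooks still completing; api_server.go keeps polling roughly every 500ms until a 200 arrives. The sketch below shows that polling pattern only; it is not minikube's api_server.go, and the URL, timeout, and the InsecureSkipVerify shortcut are assumptions for a self-contained example.]

	// Minimal sketch of polling an apiserver /healthz endpoint until it returns 200.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Sketch only: a real client would trust the cluster CA instead of skipping verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // control plane is serving; matches the "returned 200: ok" line above
				}
				// a 500 with "[-]poststarthook/... failed" means startup hooks are still running
			}
			time.Sleep(500 * time.Millisecond) // roughly the retry cadence visible in the log
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.161:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}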
	I0717 19:43:44.720631 1092475 cni.go:84] Creating CNI manager for ""
	I0717 19:43:44.720655 1092475 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:43:44.723277 1092475 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:43:44.725521 1092475 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:43:44.739760 1092475 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 19:43:44.768419 1092475 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:43:44.784903 1092475 system_pods.go:59] 6 kube-system pods found
	I0717 19:43:44.784964 1092475 system_pods.go:61] "coredns-5d78c9869d-wqtgn" [751e7e2a-ac16-4ed3-a2a2-525707d4d84d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 19:43:44.784984 1092475 system_pods.go:61] "etcd-pause-882959" [78ec428a-348c-4a75-8ec7-da945774031b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:43:44.784995 1092475 system_pods.go:61] "kube-apiserver-pause-882959" [9d878a73-d7a9-41d1-82e3-f0f10b1294b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:43:44.785006 1092475 system_pods.go:61] "kube-controller-manager-pause-882959" [7e8f2b84-b012-4c12-9f0f-b09e5a5b2a41] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:43:44.785065 1092475 system_pods.go:61] "kube-proxy-zfl75" [e585a501-1534-4a6d-8c94-fbcb8e24cad2] Running
	I0717 19:43:44.785081 1092475 system_pods.go:61] "kube-scheduler-pause-882959" [ed96358e-3d5e-4038-bd60-2ce64193f430] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:43:44.785089 1092475 system_pods.go:74] duration metric: took 16.639092ms to wait for pod list to return data ...
	I0717 19:43:44.785099 1092475 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:43:44.795837 1092475 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 19:43:44.795942 1092475 node_conditions.go:123] node cpu capacity is 2
	I0717 19:43:44.795990 1092475 node_conditions.go:105] duration metric: took 10.859648ms to run NodePressure ...
	I0717 19:43:44.796031 1092475 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:43:45.209627 1092475 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 19:43:45.221366 1092475 kubeadm.go:787] kubelet initialised
	I0717 19:43:45.221459 1092475 kubeadm.go:788] duration metric: took 11.735225ms waiting for restarted kubelet to initialise ...
	I0717 19:43:45.221482 1092475 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:43:45.243781 1092475 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-wqtgn" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:44.067485 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:44.068259 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | unable to find current IP address of domain kubernetes-upgrade-852374 in network mk-kubernetes-upgrade-852374
	I0717 19:43:44.068300 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | I0717 19:43:44.068185 1092866 retry.go:31] will retry after 2.487027797s: waiting for machine to come up
	I0717 19:43:46.558182 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:46.558684 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | unable to find current IP address of domain kubernetes-upgrade-852374 in network mk-kubernetes-upgrade-852374
	I0717 19:43:46.558708 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | I0717 19:43:46.558627 1092866 retry.go:31] will retry after 2.657681621s: waiting for machine to come up
	I0717 19:43:47.282743 1092475 pod_ready.go:102] pod "coredns-5d78c9869d-wqtgn" in "kube-system" namespace has status "Ready":"False"
	I0717 19:43:47.782819 1092475 pod_ready.go:92] pod "coredns-5d78c9869d-wqtgn" in "kube-system" namespace has status "Ready":"True"
	I0717 19:43:47.782848 1092475 pod_ready.go:81] duration metric: took 2.538958892s waiting for pod "coredns-5d78c9869d-wqtgn" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:47.782859 1092475 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-882959" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:49.805264 1092475 pod_ready.go:102] pod "etcd-pause-882959" in "kube-system" namespace has status "Ready":"False"
	I0717 19:43:49.218159 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:49.218801 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | unable to find current IP address of domain kubernetes-upgrade-852374 in network mk-kubernetes-upgrade-852374
	I0717 19:43:49.218834 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | I0717 19:43:49.218725 1092866 retry.go:31] will retry after 3.961484205s: waiting for machine to come up
	I0717 19:43:53.182878 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:53.183452 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | unable to find current IP address of domain kubernetes-upgrade-852374 in network mk-kubernetes-upgrade-852374
	I0717 19:43:53.183484 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | I0717 19:43:53.183409 1092866 retry.go:31] will retry after 3.82827494s: waiting for machine to come up
	I0717 19:43:52.299490 1092475 pod_ready.go:102] pod "etcd-pause-882959" in "kube-system" namespace has status "Ready":"False"
	I0717 19:43:54.301113 1092475 pod_ready.go:102] pod "etcd-pause-882959" in "kube-system" namespace has status "Ready":"False"
	I0717 19:43:56.516618 1092475 pod_ready.go:102] pod "etcd-pause-882959" in "kube-system" namespace has status "Ready":"False"
	I0717 19:43:57.305184 1092475 pod_ready.go:92] pod "etcd-pause-882959" in "kube-system" namespace has status "Ready":"True"
	I0717 19:43:57.305220 1092475 pod_ready.go:81] duration metric: took 9.52235189s waiting for pod "etcd-pause-882959" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:57.305237 1092475 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-882959" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:57.312124 1092475 pod_ready.go:92] pod "kube-apiserver-pause-882959" in "kube-system" namespace has status "Ready":"True"
	I0717 19:43:57.312159 1092475 pod_ready.go:81] duration metric: took 6.91274ms waiting for pod "kube-apiserver-pause-882959" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:57.312175 1092475 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-882959" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:57.318738 1092475 pod_ready.go:92] pod "kube-controller-manager-pause-882959" in "kube-system" namespace has status "Ready":"True"
	I0717 19:43:57.318762 1092475 pod_ready.go:81] duration metric: took 6.578623ms waiting for pod "kube-controller-manager-pause-882959" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:57.318772 1092475 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zfl75" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:57.325497 1092475 pod_ready.go:92] pod "kube-proxy-zfl75" in "kube-system" namespace has status "Ready":"True"
	I0717 19:43:57.325519 1092475 pod_ready.go:81] duration metric: took 6.741343ms waiting for pod "kube-proxy-zfl75" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:57.325530 1092475 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-882959" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:57.333049 1092475 pod_ready.go:92] pod "kube-scheduler-pause-882959" in "kube-system" namespace has status "Ready":"True"
	I0717 19:43:57.333076 1092475 pod_ready.go:81] duration metric: took 7.539683ms waiting for pod "kube-scheduler-pause-882959" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:57.333086 1092475 pod_ready.go:38] duration metric: took 12.111584234s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
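	[editor's note: the pod_ready.go lines above wait for each system-critical pod's Ready condition. Below is a minimal client-go sketch of that check for a single pod; the kubeconfig path and pod name are placeholders taken from this run, and minikube's real logic handles label selectors and multiple pods.]

	// Sketch: poll one pod's Ready condition until true or a 4m deadline, mirroring the log's budget.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-pause-882959", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}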
	I0717 19:43:57.333114 1092475 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 19:43:57.355608 1092475 ops.go:34] apiserver oom_adj: -16
	I0717 19:43:57.355640 1092475 kubeadm.go:640] restartCluster took 40.835873846s
	I0717 19:43:57.355654 1092475 kubeadm.go:406] StartCluster complete in 40.930456599s
	I0717 19:43:57.355710 1092475 settings.go:142] acquiring lock: {Name:mk3323296c186763db6ebb5a1c5ae94a6b1a6242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:43:57.355817 1092475 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 19:43:57.356572 1092475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/kubeconfig: {Name:mk7f4fe48169c87be980c7edd9dbe55d4ea8b9ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:43:57.356890 1092475 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 19:43:57.357056 1092475 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0717 19:43:57.361136 1092475 out.go:177] * Enabled addons: 
	I0717 19:43:57.357292 1092475 config.go:182] Loaded profile config "pause-882959": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:43:57.357608 1092475 kapi.go:59] client config for pause-882959: &rest.Config{Host:"https://192.168.61.161:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/pause-882959/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/pause-882959/client.key", CAFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 19:43:57.363218 1092475 addons.go:502] enable addons completed in 6.157731ms: enabled=[]
	I0717 19:43:57.365667 1092475 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-882959" context rescaled to 1 replicas
	I0717 19:43:57.365719 1092475 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.161 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:43:57.368268 1092475 out.go:177] * Verifying Kubernetes components...
	I0717 19:43:57.014881 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:57.015556 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Found IP for machine: 192.168.72.32
	I0717 19:43:57.015588 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Reserving static IP address...
	I0717 19:43:57.015603 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has current primary IP address 192.168.72.32 and MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:57.016189 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | found host DHCP lease matching {name: "kubernetes-upgrade-852374", mac: "52:54:00:c4:25:2e", ip: "192.168.72.32"} in network mk-kubernetes-upgrade-852374: {Iface:virbr4 ExpiryTime:2023-07-17 20:42:37 +0000 UTC Type:0 Mac:52:54:00:c4:25:2e Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:kubernetes-upgrade-852374 Clientid:01:52:54:00:c4:25:2e}
	I0717 19:43:57.016246 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Reserved static IP address: 192.168.72.32
	I0717 19:43:57.016265 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | skip adding static IP to network mk-kubernetes-upgrade-852374 - found existing host DHCP lease matching {name: "kubernetes-upgrade-852374", mac: "52:54:00:c4:25:2e", ip: "192.168.72.32"}
	I0717 19:43:57.016286 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | Getting to WaitForSSH function...
	I0717 19:43:57.016305 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Waiting for SSH to be available...
	I0717 19:43:57.018474 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:57.018892 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:25:2e", ip: ""} in network mk-kubernetes-upgrade-852374: {Iface:virbr4 ExpiryTime:2023-07-17 20:42:37 +0000 UTC Type:0 Mac:52:54:00:c4:25:2e Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:kubernetes-upgrade-852374 Clientid:01:52:54:00:c4:25:2e}
	I0717 19:43:57.018926 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined IP address 192.168.72.32 and MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:57.019213 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | Using SSH client type: external
	I0717 19:43:57.019250 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | Using SSH private key: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/kubernetes-upgrade-852374/id_rsa (-rw-------)
	I0717 19:43:57.019291 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/kubernetes-upgrade-852374/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:43:57.019311 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | About to run SSH command:
	I0717 19:43:57.019330 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | exit 0
	I0717 19:43:57.110302 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | SSH cmd err, output: <nil>: 
	I0717 19:43:57.110798 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetConfigRaw
	I0717 19:43:57.111651 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetIP
	I0717 19:43:57.114415 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:57.114778 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:25:2e", ip: ""} in network mk-kubernetes-upgrade-852374: {Iface:virbr4 ExpiryTime:2023-07-17 20:42:37 +0000 UTC Type:0 Mac:52:54:00:c4:25:2e Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:kubernetes-upgrade-852374 Clientid:01:52:54:00:c4:25:2e}
	I0717 19:43:57.114819 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined IP address 192.168.72.32 and MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:57.115001 1092821 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/kubernetes-upgrade-852374/config.json ...
	I0717 19:43:57.115233 1092821 machine.go:88] provisioning docker machine ...
	I0717 19:43:57.115262 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .DriverName
	I0717 19:43:57.115533 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetMachineName
	I0717 19:43:57.115732 1092821 buildroot.go:166] provisioning hostname "kubernetes-upgrade-852374"
	I0717 19:43:57.115765 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetMachineName
	I0717 19:43:57.115978 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHHostname
	I0717 19:43:57.118091 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:57.118407 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:25:2e", ip: ""} in network mk-kubernetes-upgrade-852374: {Iface:virbr4 ExpiryTime:2023-07-17 20:42:37 +0000 UTC Type:0 Mac:52:54:00:c4:25:2e Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:kubernetes-upgrade-852374 Clientid:01:52:54:00:c4:25:2e}
	I0717 19:43:57.118442 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined IP address 192.168.72.32 and MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:57.118590 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHPort
	I0717 19:43:57.118822 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHKeyPath
	I0717 19:43:57.119041 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHKeyPath
	I0717 19:43:57.119216 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHUsername
	I0717 19:43:57.119383 1092821 main.go:141] libmachine: Using SSH client type: native
	I0717 19:43:57.119871 1092821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0717 19:43:57.119886 1092821 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-852374 && echo "kubernetes-upgrade-852374" | sudo tee /etc/hostname
	I0717 19:43:57.244123 1092821 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-852374
	
	I0717 19:43:57.244158 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHHostname
	I0717 19:43:57.247418 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:57.247822 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:25:2e", ip: ""} in network mk-kubernetes-upgrade-852374: {Iface:virbr4 ExpiryTime:2023-07-17 20:42:37 +0000 UTC Type:0 Mac:52:54:00:c4:25:2e Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:kubernetes-upgrade-852374 Clientid:01:52:54:00:c4:25:2e}
	I0717 19:43:57.247864 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined IP address 192.168.72.32 and MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:57.248047 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHPort
	I0717 19:43:57.248272 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHKeyPath
	I0717 19:43:57.248477 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHKeyPath
	I0717 19:43:57.248603 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHUsername
	I0717 19:43:57.248824 1092821 main.go:141] libmachine: Using SSH client type: native
	I0717 19:43:57.249294 1092821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0717 19:43:57.249318 1092821 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-852374' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-852374/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-852374' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:43:57.374003 1092821 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:43:57.374038 1092821 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16890-1061725/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-1061725/.minikube}
	I0717 19:43:57.374065 1092821 buildroot.go:174] setting up certificates
	I0717 19:43:57.374077 1092821 provision.go:83] configureAuth start
	I0717 19:43:57.374089 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetMachineName
	I0717 19:43:57.374447 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetIP
	I0717 19:43:57.377595 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:57.378152 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:25:2e", ip: ""} in network mk-kubernetes-upgrade-852374: {Iface:virbr4 ExpiryTime:2023-07-17 20:42:37 +0000 UTC Type:0 Mac:52:54:00:c4:25:2e Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:kubernetes-upgrade-852374 Clientid:01:52:54:00:c4:25:2e}
	I0717 19:43:57.378188 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined IP address 192.168.72.32 and MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:57.378673 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHHostname
	I0717 19:43:57.381688 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:57.382171 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:25:2e", ip: ""} in network mk-kubernetes-upgrade-852374: {Iface:virbr4 ExpiryTime:2023-07-17 20:42:37 +0000 UTC Type:0 Mac:52:54:00:c4:25:2e Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:kubernetes-upgrade-852374 Clientid:01:52:54:00:c4:25:2e}
	I0717 19:43:57.382207 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined IP address 192.168.72.32 and MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:57.382365 1092821 provision.go:138] copyHostCerts
	I0717 19:43:57.382438 1092821 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem, removing ...
	I0717 19:43:57.382452 1092821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem
	I0717 19:43:57.382554 1092821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem (1082 bytes)
	I0717 19:43:57.382673 1092821 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem, removing ...
	I0717 19:43:57.382687 1092821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem
	I0717 19:43:57.382722 1092821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem (1123 bytes)
	I0717 19:43:57.382797 1092821 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem, removing ...
	I0717 19:43:57.382809 1092821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem
	I0717 19:43:57.382842 1092821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem (1675 bytes)
	I0717 19:43:57.382989 1092821 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-852374 san=[192.168.72.32 192.168.72.32 localhost 127.0.0.1 minikube kubernetes-upgrade-852374]
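
The server certificate generated above carries SANs for the machine IP, localhost, and both hostname aliases. Below is a minimal sketch of producing such a certificate with the standard library; it is self-signed for brevity, whereas the real provisioner signs with ca.pem/ca-key.pem, and the file names are illustrative.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-852374"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the log line above: IPs plus hostname aliases.
		IPAddresses: []net.IP{net.ParseIP("192.168.72.32"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "kubernetes-upgrade-852374"},
	}
	// Self-signed here for brevity; minikube signs with its CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
	_ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0o600)
}
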
	I0717 19:43:57.710036 1092821 provision.go:172] copyRemoteCerts
	I0717 19:43:57.710119 1092821 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:43:57.710158 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHHostname
	I0717 19:43:57.712981 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:57.713367 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:25:2e", ip: ""} in network mk-kubernetes-upgrade-852374: {Iface:virbr4 ExpiryTime:2023-07-17 20:42:37 +0000 UTC Type:0 Mac:52:54:00:c4:25:2e Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:kubernetes-upgrade-852374 Clientid:01:52:54:00:c4:25:2e}
	I0717 19:43:57.713402 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined IP address 192.168.72.32 and MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:57.713666 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHPort
	I0717 19:43:57.713942 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHKeyPath
	I0717 19:43:57.714256 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHUsername
	I0717 19:43:57.714501 1092821 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/kubernetes-upgrade-852374/id_rsa Username:docker}
	I0717 19:43:57.800083 1092821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0717 19:43:57.827867 1092821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 19:43:57.854407 1092821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 19:43:57.879563 1092821 provision.go:86] duration metric: configureAuth took 505.471298ms
	I0717 19:43:57.879595 1092821 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:43:57.879775 1092821 config.go:182] Loaded profile config "kubernetes-upgrade-852374": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:43:57.879871 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHHostname
	I0717 19:43:57.882921 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:57.883352 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:25:2e", ip: ""} in network mk-kubernetes-upgrade-852374: {Iface:virbr4 ExpiryTime:2023-07-17 20:42:37 +0000 UTC Type:0 Mac:52:54:00:c4:25:2e Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:kubernetes-upgrade-852374 Clientid:01:52:54:00:c4:25:2e}
	I0717 19:43:57.883390 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined IP address 192.168.72.32 and MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:57.883544 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHPort
	I0717 19:43:57.883713 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHKeyPath
	I0717 19:43:57.883918 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHKeyPath
	I0717 19:43:57.884071 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHUsername
	I0717 19:43:57.884257 1092821 main.go:141] libmachine: Using SSH client type: native
	I0717 19:43:57.884861 1092821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0717 19:43:57.884888 1092821 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:43:58.212045 1092821 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:43:58.212082 1092821 machine.go:91] provisioned docker machine in 1.096832354s
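
The container-runtime option step above writes a sysconfig drop-in and restarts CRI-O over SSH. A minimal sketch of the same effect when run directly on the guest as root; the drop-in content is copied from the log, and the error handling here is purely illustrative.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Content matching the drop-in written over SSH in the log above.
	opts := "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	if err := os.MkdirAll("/etc/sysconfig", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(opts), 0o644); err != nil {
		panic(err)
	}
	// Restart CRI-O so it picks up the extra options.
	if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
		panic(fmt.Errorf("restart crio: %v: %s", err, out))
	}
}
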
	I0717 19:43:58.212097 1092821 start.go:300] post-start starting for "kubernetes-upgrade-852374" (driver="kvm2")
	I0717 19:43:58.212110 1092821 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:43:58.212136 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .DriverName
	I0717 19:43:58.212578 1092821 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:43:58.212618 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHHostname
	I0717 19:43:58.215700 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:58.216169 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:25:2e", ip: ""} in network mk-kubernetes-upgrade-852374: {Iface:virbr4 ExpiryTime:2023-07-17 20:42:37 +0000 UTC Type:0 Mac:52:54:00:c4:25:2e Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:kubernetes-upgrade-852374 Clientid:01:52:54:00:c4:25:2e}
	I0717 19:43:58.216207 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined IP address 192.168.72.32 and MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:58.216394 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHPort
	I0717 19:43:58.216602 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHKeyPath
	I0717 19:43:58.216773 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHUsername
	I0717 19:43:58.216957 1092821 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/kubernetes-upgrade-852374/id_rsa Username:docker}
	I0717 19:43:58.305109 1092821 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:43:58.310287 1092821 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 19:43:58.310324 1092821 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/addons for local assets ...
	I0717 19:43:58.310488 1092821 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/files for local assets ...
	I0717 19:43:58.310652 1092821 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> 10689542.pem in /etc/ssl/certs
	I0717 19:43:58.310780 1092821 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:43:58.320383 1092821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:43:58.347461 1092821 start.go:303] post-start completed in 135.340802ms
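
The filesync scan above maps every file under .minikube/files/<target> to <target> on the guest (here files/etc/ssl/certs/10689542.pem becomes /etc/ssl/certs/10689542.pem). A minimal sketch of that mapping; the function name listLocalAssets is illustrative.

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

// listLocalAssets walks the local files directory and returns the
// source-path -> guest-path mapping used for the copy step.
func listLocalAssets(filesRoot string) (map[string]string, error) {
	assets := map[string]string{}
	err := filepath.WalkDir(filesRoot, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, relErr := filepath.Rel(filesRoot, path)
		if relErr != nil {
			return relErr
		}
		assets[path] = "/" + filepath.ToSlash(rel)
		return nil
	})
	return assets, err
}

func main() {
	m, err := listLocalAssets("/home/jenkins/minikube-integration/16890-1061725/.minikube/files")
	if err != nil {
		panic(err)
	}
	for src, dst := range m {
		fmt.Printf("%s -> %s\n", src, dst)
	}
}
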
	I0717 19:43:58.347499 1092821 fix.go:56] fixHost completed within 24.324501894s
	I0717 19:43:58.347529 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHHostname
	I0717 19:43:58.351205 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:58.351703 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:25:2e", ip: ""} in network mk-kubernetes-upgrade-852374: {Iface:virbr4 ExpiryTime:2023-07-17 20:42:37 +0000 UTC Type:0 Mac:52:54:00:c4:25:2e Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:kubernetes-upgrade-852374 Clientid:01:52:54:00:c4:25:2e}
	I0717 19:43:58.351745 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined IP address 192.168.72.32 and MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:58.351951 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHPort
	I0717 19:43:58.352169 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHKeyPath
	I0717 19:43:58.352422 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHKeyPath
	I0717 19:43:58.352634 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHUsername
	I0717 19:43:58.352863 1092821 main.go:141] libmachine: Using SSH client type: native
	I0717 19:43:58.353328 1092821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0717 19:43:58.353346 1092821 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:43:58.466610 1092821 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689623038.406717258
	
	I0717 19:43:58.466633 1092821 fix.go:206] guest clock: 1689623038.406717258
	I0717 19:43:58.466643 1092821 fix.go:219] Guest: 2023-07-17 19:43:58.406717258 +0000 UTC Remote: 2023-07-17 19:43:58.347503617 +0000 UTC m=+24.512602427 (delta=59.213641ms)
	I0717 19:43:58.466670 1092821 fix.go:190] guest clock delta is within tolerance: 59.213641ms
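
The clock check above runs date +%s.%N on the guest and compares it to the host clock, only resyncing when the delta exceeds a tolerance. A minimal sketch of parsing that output and computing the delta; the tolerance value used here is an illustrative assumption (the log only reports that the delta is acceptable).

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output (e.g. "1689623038.406717258")
// into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1689623038.406717258")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	const tolerance = 2 * time.Second // illustrative threshold
	fmt.Printf("guest clock delta %v (within %v: %v)\n",
		delta, tolerance, math.Abs(delta.Seconds()) <= tolerance.Seconds())
}
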
	I0717 19:43:58.466677 1092821 start.go:83] releasing machines lock for "kubernetes-upgrade-852374", held for 24.44372546s
	I0717 19:43:58.466704 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .DriverName
	I0717 19:43:58.467028 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetIP
	I0717 19:43:58.470079 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:58.470439 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:25:2e", ip: ""} in network mk-kubernetes-upgrade-852374: {Iface:virbr4 ExpiryTime:2023-07-17 20:42:37 +0000 UTC Type:0 Mac:52:54:00:c4:25:2e Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:kubernetes-upgrade-852374 Clientid:01:52:54:00:c4:25:2e}
	I0717 19:43:58.470478 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined IP address 192.168.72.32 and MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:58.470663 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .DriverName
	I0717 19:43:58.471275 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .DriverName
	I0717 19:43:58.471472 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .DriverName
	I0717 19:43:58.471549 1092821 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:43:58.471600 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHHostname
	I0717 19:43:58.471712 1092821 ssh_runner.go:195] Run: cat /version.json
	I0717 19:43:58.471737 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHHostname
	I0717 19:43:58.474326 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:58.474475 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:58.474727 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:25:2e", ip: ""} in network mk-kubernetes-upgrade-852374: {Iface:virbr4 ExpiryTime:2023-07-17 20:42:37 +0000 UTC Type:0 Mac:52:54:00:c4:25:2e Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:kubernetes-upgrade-852374 Clientid:01:52:54:00:c4:25:2e}
	I0717 19:43:58.474775 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined IP address 192.168.72.32 and MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:58.474831 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:25:2e", ip: ""} in network mk-kubernetes-upgrade-852374: {Iface:virbr4 ExpiryTime:2023-07-17 20:42:37 +0000 UTC Type:0 Mac:52:54:00:c4:25:2e Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:kubernetes-upgrade-852374 Clientid:01:52:54:00:c4:25:2e}
	I0717 19:43:58.474860 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined IP address 192.168.72.32 and MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:58.474875 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHPort
	I0717 19:43:58.475090 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHKeyPath
	I0717 19:43:58.475095 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHPort
	I0717 19:43:58.475290 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHKeyPath
	I0717 19:43:58.475307 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHUsername
	I0717 19:43:58.475438 1092821 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/kubernetes-upgrade-852374/id_rsa Username:docker}
	I0717 19:43:58.475506 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHUsername
	I0717 19:43:58.475610 1092821 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/kubernetes-upgrade-852374/id_rsa Username:docker}
	W0717 19:43:58.555637 1092821 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 19:43:58.555730 1092821 ssh_runner.go:195] Run: systemctl --version
	I0717 19:43:58.585035 1092821 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:43:58.739160 1092821 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:43:58.747084 1092821 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:43:58.747192 1092821 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:43:58.763495 1092821 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:43:58.763528 1092821 start.go:469] detecting cgroup driver to use...
	I0717 19:43:58.763598 1092821 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:43:58.778393 1092821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:43:58.792251 1092821 docker.go:196] disabling cri-docker service (if available) ...
	I0717 19:43:58.792318 1092821 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:43:58.805464 1092821 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:43:58.819247 1092821 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:43:58.941594 1092821 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:43:59.086847 1092821 docker.go:212] disabling docker service ...
	I0717 19:43:59.086939 1092821 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:43:59.103870 1092821 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:43:59.118485 1092821 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:43:59.253200 1092821 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:43:59.391215 1092821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
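
Because the profile uses CRI-O, the steps above stop, disable, and mask the cri-dockerd and docker units before continuing. A minimal sketch of the same systemctl sequence via os/exec; unit names come from the log, and since not every unit exists on every image, failures are only logged here.

package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) {
	if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
		fmt.Printf("%v failed: %v (%s)\n", args, err, out)
	}
}

func main() {
	for _, unit := range []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"} {
		run("systemctl", "stop", "-f", unit)
	}
	run("systemctl", "disable", "cri-docker.socket")
	run("systemctl", "disable", "docker.socket")
	run("systemctl", "mask", "cri-docker.service")
	run("systemctl", "mask", "docker.service")
}
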
	I0717 19:43:59.404196 1092821 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:43:59.425769 1092821 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:43:59.425824 1092821 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:43:59.436690 1092821 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:43:59.436775 1092821 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:43:59.448934 1092821 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:43:59.460118 1092821 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:43:59.470731 1092821 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
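
The sed commands above pin the pause image to registry.k8s.io/pause:3.9 and switch CRI-O to the cgroupfs cgroup manager (with conmon_cgroup = "pod") in /etc/crio/crio.conf.d/02-crio.conf. A minimal in-Go sketch of the same rewrites; it mirrors the sed substitutions rather than minikube's actual implementation.

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Same substitutions as the sed commands in the log.
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
	data = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n?`).ReplaceAll(data, nil)
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`+"\n"+`conmon_cgroup = "pod"`))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		panic(err)
	}
}
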
	I0717 19:43:59.481661 1092821 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:43:59.491714 1092821 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:43:59.491788 1092821 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:43:59.507269 1092821 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:43:59.519754 1092821 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:43:59.648445 1092821 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:43:59.867784 1092821 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:43:59.867866 1092821 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:43:59.875505 1092821 start.go:537] Will wait 60s for crictl version
	I0717 19:43:59.875580 1092821 ssh_runner.go:195] Run: which crictl
	I0717 19:43:59.880545 1092821 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:43:59.916914 1092821 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 19:43:59.917011 1092821 ssh_runner.go:195] Run: crio --version
	I0717 19:43:59.975577 1092821 ssh_runner.go:195] Run: crio --version
	I0717 19:44:00.028632 1092821 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
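
The crictl probe above confirms the runtime (cri-o 1.24.1, API v1alpha2) before Kubernetes setup continues. A minimal sketch of running the same probe and extracting the runtime fields; invocation and output format follow the log.

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
	if err != nil {
		panic(fmt.Errorf("crictl version: %v: %s", err, out))
	}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "RuntimeName:") || strings.HasPrefix(line, "RuntimeVersion:") {
			fmt.Println(strings.TrimSpace(line))
		}
	}
}
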
	I0717 19:43:57.370076 1092475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:43:57.467370 1092475 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0717 19:43:57.467369 1092475 node_ready.go:35] waiting up to 6m0s for node "pause-882959" to be "Ready" ...
	I0717 19:43:57.496510 1092475 node_ready.go:49] node "pause-882959" has status "Ready":"True"
	I0717 19:43:57.496545 1092475 node_ready.go:38] duration metric: took 29.138971ms waiting for node "pause-882959" to be "Ready" ...
	I0717 19:43:57.496557 1092475 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:43:57.698827 1092475 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-wqtgn" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:58.095689 1092475 pod_ready.go:92] pod "coredns-5d78c9869d-wqtgn" in "kube-system" namespace has status "Ready":"True"
	I0717 19:43:58.095719 1092475 pod_ready.go:81] duration metric: took 396.855147ms waiting for pod "coredns-5d78c9869d-wqtgn" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:58.095730 1092475 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-882959" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:58.496481 1092475 pod_ready.go:92] pod "etcd-pause-882959" in "kube-system" namespace has status "Ready":"True"
	I0717 19:43:58.496511 1092475 pod_ready.go:81] duration metric: took 400.775096ms waiting for pod "etcd-pause-882959" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:58.496526 1092475 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-882959" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:58.897434 1092475 pod_ready.go:92] pod "kube-apiserver-pause-882959" in "kube-system" namespace has status "Ready":"True"
	I0717 19:43:58.897459 1092475 pod_ready.go:81] duration metric: took 400.926711ms waiting for pod "kube-apiserver-pause-882959" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:58.897472 1092475 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-882959" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:59.296907 1092475 pod_ready.go:92] pod "kube-controller-manager-pause-882959" in "kube-system" namespace has status "Ready":"True"
	I0717 19:43:59.296946 1092475 pod_ready.go:81] duration metric: took 399.465164ms waiting for pod "kube-controller-manager-pause-882959" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:59.296962 1092475 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zfl75" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:59.696699 1092475 pod_ready.go:92] pod "kube-proxy-zfl75" in "kube-system" namespace has status "Ready":"True"
	I0717 19:43:59.696727 1092475 pod_ready.go:81] duration metric: took 399.756797ms waiting for pod "kube-proxy-zfl75" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:59.696736 1092475 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-882959" in "kube-system" namespace to be "Ready" ...
	I0717 19:44:00.096858 1092475 pod_ready.go:92] pod "kube-scheduler-pause-882959" in "kube-system" namespace has status "Ready":"True"
	I0717 19:44:00.096886 1092475 pod_ready.go:81] duration metric: took 400.143388ms waiting for pod "kube-scheduler-pause-882959" in "kube-system" namespace to be "Ready" ...
	I0717 19:44:00.096895 1092475 pod_ready.go:38] duration metric: took 2.600327122s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
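
The node and pod "Ready" waits above all follow one pattern: poll a readiness condition every few hundred milliseconds until it succeeds or a 6-minute budget expires. A minimal, generic sketch of that loop; it deliberately avoids client-go and takes the readiness check as a callback, so the names and placeholder condition are illustrative.

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// waitReady polls check until it returns true, returns an error, or ctx expires.
func waitReady(ctx context.Context, interval time.Duration, check func() (bool, error)) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		ready, err := check()
		if err != nil {
			return err
		}
		if ready {
			return nil
		}
		select {
		case <-ctx.Done():
			return errors.New("timed out waiting for condition")
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	start := time.Now()
	// Placeholder condition; the real check asks the API server whether the
	// pod's Ready condition is True.
	err := waitReady(ctx, 400*time.Millisecond, func() (bool, error) { return time.Since(start) > time.Second, nil })
	fmt.Println("ready:", err == nil)
}
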
	I0717 19:44:00.096912 1092475 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:44:00.096954 1092475 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:44:00.117678 1092475 api_server.go:72] duration metric: took 2.751917265s to wait for apiserver process to appear ...
	I0717 19:44:00.117713 1092475 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:44:00.117736 1092475 api_server.go:253] Checking apiserver healthz at https://192.168.61.161:8443/healthz ...
	I0717 19:44:00.127038 1092475 api_server.go:279] https://192.168.61.161:8443/healthz returned 200:
	ok
	I0717 19:44:00.128710 1092475 api_server.go:141] control plane version: v1.27.3
	I0717 19:44:00.128745 1092475 api_server.go:131] duration metric: took 11.024265ms to wait for apiserver health ...
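
The healthz probe above is an HTTPS GET against the apiserver that expects a 200 response with body "ok". A minimal sketch of the same request; it skips TLS verification only to stay self-contained, whereas the real check authenticates with the cluster's certificates from the kubeconfig.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping verification keeps the sketch self-contained; do not do this
		// outside of a throwaway probe.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.61.161:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", resp.Request.URL, resp.StatusCode, body)
}
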
	I0717 19:44:00.128757 1092475 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:44:00.301327 1092475 system_pods.go:59] 6 kube-system pods found
	I0717 19:44:00.301356 1092475 system_pods.go:61] "coredns-5d78c9869d-wqtgn" [751e7e2a-ac16-4ed3-a2a2-525707d4d84d] Running
	I0717 19:44:00.301361 1092475 system_pods.go:61] "etcd-pause-882959" [78ec428a-348c-4a75-8ec7-da945774031b] Running
	I0717 19:44:00.301365 1092475 system_pods.go:61] "kube-apiserver-pause-882959" [9d878a73-d7a9-41d1-82e3-f0f10b1294b6] Running
	I0717 19:44:00.301370 1092475 system_pods.go:61] "kube-controller-manager-pause-882959" [7e8f2b84-b012-4c12-9f0f-b09e5a5b2a41] Running
	I0717 19:44:00.301374 1092475 system_pods.go:61] "kube-proxy-zfl75" [e585a501-1534-4a6d-8c94-fbcb8e24cad2] Running
	I0717 19:44:00.301378 1092475 system_pods.go:61] "kube-scheduler-pause-882959" [ed96358e-3d5e-4038-bd60-2ce64193f430] Running
	I0717 19:44:00.301383 1092475 system_pods.go:74] duration metric: took 172.619982ms to wait for pod list to return data ...
	I0717 19:44:00.301392 1092475 default_sa.go:34] waiting for default service account to be created ...
	I0717 19:44:00.498216 1092475 default_sa.go:45] found service account: "default"
	I0717 19:44:00.498252 1092475 default_sa.go:55] duration metric: took 196.854112ms for default service account to be created ...
	I0717 19:44:00.498266 1092475 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 19:44:00.701540 1092475 system_pods.go:86] 6 kube-system pods found
	I0717 19:44:00.701602 1092475 system_pods.go:89] "coredns-5d78c9869d-wqtgn" [751e7e2a-ac16-4ed3-a2a2-525707d4d84d] Running
	I0717 19:44:00.701610 1092475 system_pods.go:89] "etcd-pause-882959" [78ec428a-348c-4a75-8ec7-da945774031b] Running
	I0717 19:44:00.701615 1092475 system_pods.go:89] "kube-apiserver-pause-882959" [9d878a73-d7a9-41d1-82e3-f0f10b1294b6] Running
	I0717 19:44:00.701621 1092475 system_pods.go:89] "kube-controller-manager-pause-882959" [7e8f2b84-b012-4c12-9f0f-b09e5a5b2a41] Running
	I0717 19:44:00.701626 1092475 system_pods.go:89] "kube-proxy-zfl75" [e585a501-1534-4a6d-8c94-fbcb8e24cad2] Running
	I0717 19:44:00.701631 1092475 system_pods.go:89] "kube-scheduler-pause-882959" [ed96358e-3d5e-4038-bd60-2ce64193f430] Running
	I0717 19:44:00.701640 1092475 system_pods.go:126] duration metric: took 203.366821ms to wait for k8s-apps to be running ...
	I0717 19:44:00.701647 1092475 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 19:44:00.701700 1092475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:44:00.727411 1092475 system_svc.go:56] duration metric: took 25.747782ms WaitForService to wait for kubelet.
	I0717 19:44:00.727449 1092475 kubeadm.go:581] duration metric: took 3.361697217s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 19:44:00.727474 1092475 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:44:00.899324 1092475 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 19:44:00.899364 1092475 node_conditions.go:123] node cpu capacity is 2
	I0717 19:44:00.899382 1092475 node_conditions.go:105] duration metric: took 171.900278ms to run NodePressure ...
	I0717 19:44:00.899398 1092475 start.go:228] waiting for startup goroutines ...
	I0717 19:44:00.899407 1092475 start.go:233] waiting for cluster config update ...
	I0717 19:44:00.899423 1092475 start.go:242] writing updated cluster config ...
	I0717 19:44:00.899860 1092475 ssh_runner.go:195] Run: rm -f paused
	I0717 19:44:00.984206 1092475 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 19:44:00.987356 1092475 out.go:177] * Done! kubectl is now configured to use "pause-882959" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-07-17 19:42:04 UTC, ends at Mon 2023-07-17 19:44:02 UTC. --
	Jul 17 19:44:02 pause-882959 crio[2539]: time="2023-07-17 19:44:02.229219002Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d14377fe-edf6-4044-8b1b-5ccea0dadcb5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:44:02 pause-882959 crio[2539]: time="2023-07-17 19:44:02.229319999Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d14377fe-edf6-4044-8b1b-5ccea0dadcb5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:44:02 pause-882959 crio[2539]: time="2023-07-17 19:44:02.229658305Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4651fcefee21d8da426e4bfbeb78e89164404e4940824cdbc24ea66f1addf81c,PodSandboxId:c2d840626c67986a478229adbe4a47e97620a19641bfe49d8186f52c75c4bed4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689623023424584943,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zfl75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e585a501-1534-4a6d-8c94-fbcb8e24cad2,},Annotations:map[string]string{io.kubernetes.container.hash: 402bb1e2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ddeef059aeac9a52019532e6280b7ad7f6fa81d1425584c651d0ba1e49636e9,PodSandboxId:578a3416f5d892ee2f6abdd060f83a2ca1357577a7ab0d8f90bec5dcfb7e1e12,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689623023187023598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-wqtgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751e7e2a-ac16-4ed3-a2a2-525707d4d84d,},Annotations:map[string]string{io.kubernetes.container.hash: 7711e259,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552f85e3417ed736a1f2555de4433f52c389d54f83a05e9b551ad6e26f41f54b,PodSandboxId:8875318cf44000b9a01b84e602b04469f4cc33e9f4baef75fbab0dc0ec4b8ebe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689623015478720588,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee4f8d
565bd40bb276359f9a3316e30,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f6c76b47d06a2c8510573c69e753d035faa7afc3ecc92113b1d5517c1e7aa2,PodSandboxId:04d6be588f83a25b865a30956b84e40e4d1a51fe0495e92d20824a92634136b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689623015465560988,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 146c3f581a9f3949700e695b352faa81,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65e86b0e324b8ae4f3194fcab72f659490dd508368f347032b4c8ba84a4708d8,PodSandboxId:23bf275989aa0cebbe17e18872e5cfbe841b9ccc1d61eba41efe18b799210bd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689623015395941048,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 630e68b7fc44a2b8708830cc8b87ff6d,},Annotati
ons:map[string]string{io.kubernetes.container.hash: d99c380c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f5a361747df73dae8da3eab5401ba47a5d862de911f553d695d924d854b3325,PodSandboxId:d4ad4d6790f9a14cd0a47e44d12886aba3fabd14ec31fd27b12e132ca35b9f34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689623015405584109,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f90ddf48f9fe6fe40a497facd9340e,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4293fa42,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1d1ae4fbdbc1163966bde2d686c11c1507edff36bf2532346f67a91f5ffccb3,PodSandboxId:d4ad4d6790f9a14cd0a47e44d12886aba3fabd14ec31fd27b12e132ca35b9f34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_EXITED,CreatedAt:1689623006026593834,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f90ddf48f9fe6fe40a497facd9340e,},Annotations:map[string]string{io.kubernet
es.container.hash: 4293fa42,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea020bea67e6847143435bd85b774e1f7ae721b240ead0b53db406afbadd92d,PodSandboxId:c2d840626c67986a478229adbe4a47e97620a19641bfe49d8186f52c75c4bed4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_EXITED,CreatedAt:1689623000841860137,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zfl75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e585a501-1534-4a6d-8c94-fbcb8e24cad2,},Annotations:map[string]string{io.kubernetes.container.hash: 402bb1e2,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e713b911a483c22e00dcb0423f7a498c8a1e2d0cb49f052b7f7ea017d524c14,PodSandboxId:578a3416f5d892ee2f6abdd060f83a2ca1357577a7ab0d8f90bec5dcfb7e1e12,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1689622999414703957,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-wqtgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751e7e2a-ac16-4ed3-a2a2-525707d4d84d,},Annotations:map[string]string{io.kubernetes.container.hash: 7711e259,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c720ae6c03731bdea255e8770e2f31e560717e6d78de0ed00b78066c89a21652,PodSandboxId:8875318cf44000b9a01b84e602b04469f4cc33e9f4baef75fbab0dc0ec4b8ebe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_EXITED,CreatedAt:1689622998714728573,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-882
959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee4f8d565bd40bb276359f9a3316e30,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dd7d620012bded34deeef2ec3386cb9036ac0ed8255e276646abb38d6ec371c,PodSandboxId:23bf275989aa0cebbe17e18872e5cfbe841b9ccc1d61eba41efe18b799210bd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_EXITED,CreatedAt:1689622998290377722,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 630e68b7fc44a2b8708830cc8b87ff6d,},Annotations:map[string]string{io.kubernetes.container.hash: d99c380c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf4ef96d6cd5c498572f61037aad7de89db67f22ada0211a0eede41c89e02832,PodSandboxId:04d6be588f83a25b865a30956b84e40e4d1a51fe0495e92d20824a92634136b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_EXITED,CreatedAt:1689622997752269802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-882959,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 146c3f581a9f3949700e695b352faa81,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cd806479e841d0c7ce2d834a8662b906ce551fbe29c2d6a254748e6426d13ab,PodSandboxId:dae5cce88bd5773450bb49d4763c330ff7c3db4535a422cb4720989af0c6c26a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1689622978792888320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-9m5pz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: decefa53-94e0-4dae-a713-5aa724208ceb,},Ann
otations:map[string]string{io.kubernetes.container.hash: f76a51bb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d14377fe-edf6-4044-8b1b-5ccea0dadcb5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:44:02 pause-882959 crio[2539]: time="2023-07-17 19:44:02.272859773Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2f3cf33b-2680-478b-b1f7-f4402bdee64b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:44:02 pause-882959 crio[2539]: time="2023-07-17 19:44:02.272930399Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2f3cf33b-2680-478b-b1f7-f4402bdee64b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:44:02 pause-882959 crio[2539]: time="2023-07-17 19:44:02.273312828Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4651fcefee21d8da426e4bfbeb78e89164404e4940824cdbc24ea66f1addf81c,PodSandboxId:c2d840626c67986a478229adbe4a47e97620a19641bfe49d8186f52c75c4bed4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689623023424584943,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zfl75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e585a501-1534-4a6d-8c94-fbcb8e24cad2,},Annotations:map[string]string{io.kubernetes.container.hash: 402bb1e2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ddeef059aeac9a52019532e6280b7ad7f6fa81d1425584c651d0ba1e49636e9,PodSandboxId:578a3416f5d892ee2f6abdd060f83a2ca1357577a7ab0d8f90bec5dcfb7e1e12,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689623023187023598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-wqtgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751e7e2a-ac16-4ed3-a2a2-525707d4d84d,},Annotations:map[string]string{io.kubernetes.container.hash: 7711e259,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552f85e3417ed736a1f2555de4433f52c389d54f83a05e9b551ad6e26f41f54b,PodSandboxId:8875318cf44000b9a01b84e602b04469f4cc33e9f4baef75fbab0dc0ec4b8ebe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689623015478720588,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee4f8d
565bd40bb276359f9a3316e30,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f6c76b47d06a2c8510573c69e753d035faa7afc3ecc92113b1d5517c1e7aa2,PodSandboxId:04d6be588f83a25b865a30956b84e40e4d1a51fe0495e92d20824a92634136b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689623015465560988,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 146c3f581a9f3949700e695b352faa81,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65e86b0e324b8ae4f3194fcab72f659490dd508368f347032b4c8ba84a4708d8,PodSandboxId:23bf275989aa0cebbe17e18872e5cfbe841b9ccc1d61eba41efe18b799210bd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689623015395941048,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 630e68b7fc44a2b8708830cc8b87ff6d,},Annotati
ons:map[string]string{io.kubernetes.container.hash: d99c380c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f5a361747df73dae8da3eab5401ba47a5d862de911f553d695d924d854b3325,PodSandboxId:d4ad4d6790f9a14cd0a47e44d12886aba3fabd14ec31fd27b12e132ca35b9f34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689623015405584109,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f90ddf48f9fe6fe40a497facd9340e,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4293fa42,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1d1ae4fbdbc1163966bde2d686c11c1507edff36bf2532346f67a91f5ffccb3,PodSandboxId:d4ad4d6790f9a14cd0a47e44d12886aba3fabd14ec31fd27b12e132ca35b9f34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_EXITED,CreatedAt:1689623006026593834,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f90ddf48f9fe6fe40a497facd9340e,},Annotations:map[string]string{io.kubernet
es.container.hash: 4293fa42,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea020bea67e6847143435bd85b774e1f7ae721b240ead0b53db406afbadd92d,PodSandboxId:c2d840626c67986a478229adbe4a47e97620a19641bfe49d8186f52c75c4bed4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_EXITED,CreatedAt:1689623000841860137,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zfl75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e585a501-1534-4a6d-8c94-fbcb8e24cad2,},Annotations:map[string]string{io.kubernetes.container.hash: 402bb1e2,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e713b911a483c22e00dcb0423f7a498c8a1e2d0cb49f052b7f7ea017d524c14,PodSandboxId:578a3416f5d892ee2f6abdd060f83a2ca1357577a7ab0d8f90bec5dcfb7e1e12,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1689622999414703957,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-wqtgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751e7e2a-ac16-4ed3-a2a2-525707d4d84d,},Annotations:map[string]string{io.kubernetes.container.hash: 7711e259,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c720ae6c03731bdea255e8770e2f31e560717e6d78de0ed00b78066c89a21652,PodSandboxId:8875318cf44000b9a01b84e602b04469f4cc33e9f4baef75fbab0dc0ec4b8ebe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_EXITED,CreatedAt:1689622998714728573,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-882
959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee4f8d565bd40bb276359f9a3316e30,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dd7d620012bded34deeef2ec3386cb9036ac0ed8255e276646abb38d6ec371c,PodSandboxId:23bf275989aa0cebbe17e18872e5cfbe841b9ccc1d61eba41efe18b799210bd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_EXITED,CreatedAt:1689622998290377722,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 630e68b7fc44a2b8708830cc8b87ff6d,},Annotations:map[string]string{io.kubernetes.container.hash: d99c380c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf4ef96d6cd5c498572f61037aad7de89db67f22ada0211a0eede41c89e02832,PodSandboxId:04d6be588f83a25b865a30956b84e40e4d1a51fe0495e92d20824a92634136b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_EXITED,CreatedAt:1689622997752269802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-882959,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 146c3f581a9f3949700e695b352faa81,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cd806479e841d0c7ce2d834a8662b906ce551fbe29c2d6a254748e6426d13ab,PodSandboxId:dae5cce88bd5773450bb49d4763c330ff7c3db4535a422cb4720989af0c6c26a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1689622978792888320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-9m5pz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: decefa53-94e0-4dae-a713-5aa724208ceb,},Ann
otations:map[string]string{io.kubernetes.container.hash: f76a51bb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2f3cf33b-2680-478b-b1f7-f4402bdee64b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:44:02 pause-882959 crio[2539]: time="2023-07-17 19:44:02.330391646Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4a29721d-6835-4047-9c7c-ad36ec057512 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 19:44:02 pause-882959 crio[2539]: time="2023-07-17 19:44:02.330615512Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:578a3416f5d892ee2f6abdd060f83a2ca1357577a7ab0d8f90bec5dcfb7e1e12,Metadata:&PodSandboxMetadata{Name:coredns-5d78c9869d-wqtgn,Uid:751e7e2a-ac16-4ed3-a2a2-525707d4d84d,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1689622997236322714,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5d78c9869d-wqtgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751e7e2a-ac16-4ed3-a2a2-525707d4d84d,k8s-app: kube-dns,pod-template-hash: 5d78c9869d,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T19:42:57.332358038Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c2d840626c67986a478229adbe4a47e97620a19641bfe49d8186f52c75c4bed4,Metadata:&PodSandboxMetadata{Name:kube-proxy-zfl75,Uid:e585a501-1534-4a6d-8c94-fbcb8e24cad2,Namespace:kube-system,Attempt
:2,},State:SANDBOX_READY,CreatedAt:1689622997223134948,Labels:map[string]string{controller-revision-hash: 56999f657b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-zfl75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e585a501-1534-4a6d-8c94-fbcb8e24cad2,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T19:42:57.239626388Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:23bf275989aa0cebbe17e18872e5cfbe841b9ccc1d61eba41efe18b799210bd4,Metadata:&PodSandboxMetadata{Name:etcd-pause-882959,Uid:630e68b7fc44a2b8708830cc8b87ff6d,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1689622997187713435,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 630e68b7fc44a2b8708830cc8b87ff6d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/
etcd.advertise-client-urls: https://192.168.61.161:2379,kubernetes.io/config.hash: 630e68b7fc44a2b8708830cc8b87ff6d,kubernetes.io/config.seen: 2023-07-17T19:42:42.435260652Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d4ad4d6790f9a14cd0a47e44d12886aba3fabd14ec31fd27b12e132ca35b9f34,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-882959,Uid:87f90ddf48f9fe6fe40a497facd9340e,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1689622997150151194,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f90ddf48f9fe6fe40a497facd9340e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.161:8443,kubernetes.io/config.hash: 87f90ddf48f9fe6fe40a497facd9340e,kubernetes.io/config.seen: 2023-07-17T19:42:42.435264865Z,kubernetes.io/config.source: file,},RuntimeH
andler:,},&PodSandbox{Id:8875318cf44000b9a01b84e602b04469f4cc33e9f4baef75fbab0dc0ec4b8ebe,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-882959,Uid:2ee4f8d565bd40bb276359f9a3316e30,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1689622997121390059,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee4f8d565bd40bb276359f9a3316e30,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2ee4f8d565bd40bb276359f9a3316e30,kubernetes.io/config.seen: 2023-07-17T19:42:42.435266816Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:04d6be588f83a25b865a30956b84e40e4d1a51fe0495e92d20824a92634136b5,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-882959,Uid:146c3f581a9f3949700e695b352faa81,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1689622997080413997,Labels:map[string]str
ing{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 146c3f581a9f3949700e695b352faa81,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 146c3f581a9f3949700e695b352faa81,kubernetes.io/config.seen: 2023-07-17T19:42:42.435266110Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=4a29721d-6835-4047-9c7c-ad36ec057512 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 19:44:02 pause-882959 crio[2539]: time="2023-07-17 19:44:02.331591409Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e7dddc34-b990-48ad-a001-2a1662e5a8ad name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:44:02 pause-882959 crio[2539]: time="2023-07-17 19:44:02.331682611Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e7dddc34-b990-48ad-a001-2a1662e5a8ad name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:44:02 pause-882959 crio[2539]: time="2023-07-17 19:44:02.331936948Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4651fcefee21d8da426e4bfbeb78e89164404e4940824cdbc24ea66f1addf81c,PodSandboxId:c2d840626c67986a478229adbe4a47e97620a19641bfe49d8186f52c75c4bed4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689623023424584943,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zfl75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e585a501-1534-4a6d-8c94-fbcb8e24cad2,},Annotations:map[string]string{io.kubernetes.container.hash: 402bb1e2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ddeef059aeac9a52019532e6280b7ad7f6fa81d1425584c651d0ba1e49636e9,PodSandboxId:578a3416f5d892ee2f6abdd060f83a2ca1357577a7ab0d8f90bec5dcfb7e1e12,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689623023187023598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-wqtgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751e7e2a-ac16-4ed3-a2a2-525707d4d84d,},Annotations:map[string]string{io.kubernetes.container.hash: 7711e259,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552f85e3417ed736a1f2555de4433f52c389d54f83a05e9b551ad6e26f41f54b,PodSandboxId:8875318cf44000b9a01b84e602b04469f4cc33e9f4baef75fbab0dc0ec4b8ebe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689623015478720588,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee4f8d
565bd40bb276359f9a3316e30,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f6c76b47d06a2c8510573c69e753d035faa7afc3ecc92113b1d5517c1e7aa2,PodSandboxId:04d6be588f83a25b865a30956b84e40e4d1a51fe0495e92d20824a92634136b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689623015465560988,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 146c3f581a9f3949700e695b352faa81,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65e86b0e324b8ae4f3194fcab72f659490dd508368f347032b4c8ba84a4708d8,PodSandboxId:23bf275989aa0cebbe17e18872e5cfbe841b9ccc1d61eba41efe18b799210bd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689623015395941048,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 630e68b7fc44a2b8708830cc8b87ff6d,},Annotati
ons:map[string]string{io.kubernetes.container.hash: d99c380c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f5a361747df73dae8da3eab5401ba47a5d862de911f553d695d924d854b3325,PodSandboxId:d4ad4d6790f9a14cd0a47e44d12886aba3fabd14ec31fd27b12e132ca35b9f34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689623015405584109,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f90ddf48f9fe6fe40a497facd9340e,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4293fa42,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e7dddc34-b990-48ad-a001-2a1662e5a8ad name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:44:02 pause-882959 crio[2539]: time="2023-07-17 19:44:02.334624079Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=882429de-6048-4280-b830-87841c3ff444 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:44:02 pause-882959 crio[2539]: time="2023-07-17 19:44:02.334692237Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=882429de-6048-4280-b830-87841c3ff444 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:44:02 pause-882959 crio[2539]: time="2023-07-17 19:44:02.334972436Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4651fcefee21d8da426e4bfbeb78e89164404e4940824cdbc24ea66f1addf81c,PodSandboxId:c2d840626c67986a478229adbe4a47e97620a19641bfe49d8186f52c75c4bed4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689623023424584943,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zfl75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e585a501-1534-4a6d-8c94-fbcb8e24cad2,},Annotations:map[string]string{io.kubernetes.container.hash: 402bb1e2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ddeef059aeac9a52019532e6280b7ad7f6fa81d1425584c651d0ba1e49636e9,PodSandboxId:578a3416f5d892ee2f6abdd060f83a2ca1357577a7ab0d8f90bec5dcfb7e1e12,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689623023187023598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-wqtgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751e7e2a-ac16-4ed3-a2a2-525707d4d84d,},Annotations:map[string]string{io.kubernetes.container.hash: 7711e259,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552f85e3417ed736a1f2555de4433f52c389d54f83a05e9b551ad6e26f41f54b,PodSandboxId:8875318cf44000b9a01b84e602b04469f4cc33e9f4baef75fbab0dc0ec4b8ebe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689623015478720588,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee4f8d
565bd40bb276359f9a3316e30,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f6c76b47d06a2c8510573c69e753d035faa7afc3ecc92113b1d5517c1e7aa2,PodSandboxId:04d6be588f83a25b865a30956b84e40e4d1a51fe0495e92d20824a92634136b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689623015465560988,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 146c3f581a9f3949700e695b352faa81,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65e86b0e324b8ae4f3194fcab72f659490dd508368f347032b4c8ba84a4708d8,PodSandboxId:23bf275989aa0cebbe17e18872e5cfbe841b9ccc1d61eba41efe18b799210bd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689623015395941048,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 630e68b7fc44a2b8708830cc8b87ff6d,},Annotati
ons:map[string]string{io.kubernetes.container.hash: d99c380c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f5a361747df73dae8da3eab5401ba47a5d862de911f553d695d924d854b3325,PodSandboxId:d4ad4d6790f9a14cd0a47e44d12886aba3fabd14ec31fd27b12e132ca35b9f34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689623015405584109,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f90ddf48f9fe6fe40a497facd9340e,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4293fa42,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1d1ae4fbdbc1163966bde2d686c11c1507edff36bf2532346f67a91f5ffccb3,PodSandboxId:d4ad4d6790f9a14cd0a47e44d12886aba3fabd14ec31fd27b12e132ca35b9f34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_EXITED,CreatedAt:1689623006026593834,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f90ddf48f9fe6fe40a497facd9340e,},Annotations:map[string]string{io.kubernet
es.container.hash: 4293fa42,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea020bea67e6847143435bd85b774e1f7ae721b240ead0b53db406afbadd92d,PodSandboxId:c2d840626c67986a478229adbe4a47e97620a19641bfe49d8186f52c75c4bed4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_EXITED,CreatedAt:1689623000841860137,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zfl75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e585a501-1534-4a6d-8c94-fbcb8e24cad2,},Annotations:map[string]string{io.kubernetes.container.hash: 402bb1e2,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e713b911a483c22e00dcb0423f7a498c8a1e2d0cb49f052b7f7ea017d524c14,PodSandboxId:578a3416f5d892ee2f6abdd060f83a2ca1357577a7ab0d8f90bec5dcfb7e1e12,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1689622999414703957,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-wqtgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751e7e2a-ac16-4ed3-a2a2-525707d4d84d,},Annotations:map[string]string{io.kubernetes.container.hash: 7711e259,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c720ae6c03731bdea255e8770e2f31e560717e6d78de0ed00b78066c89a21652,PodSandboxId:8875318cf44000b9a01b84e602b04469f4cc33e9f4baef75fbab0dc0ec4b8ebe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_EXITED,CreatedAt:1689622998714728573,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-882
959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee4f8d565bd40bb276359f9a3316e30,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dd7d620012bded34deeef2ec3386cb9036ac0ed8255e276646abb38d6ec371c,PodSandboxId:23bf275989aa0cebbe17e18872e5cfbe841b9ccc1d61eba41efe18b799210bd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_EXITED,CreatedAt:1689622998290377722,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 630e68b7fc44a2b8708830cc8b87ff6d,},Annotations:map[string]string{io.kubernetes.container.hash: d99c380c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf4ef96d6cd5c498572f61037aad7de89db67f22ada0211a0eede41c89e02832,PodSandboxId:04d6be588f83a25b865a30956b84e40e4d1a51fe0495e92d20824a92634136b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_EXITED,CreatedAt:1689622997752269802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-882959,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 146c3f581a9f3949700e695b352faa81,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cd806479e841d0c7ce2d834a8662b906ce551fbe29c2d6a254748e6426d13ab,PodSandboxId:dae5cce88bd5773450bb49d4763c330ff7c3db4535a422cb4720989af0c6c26a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1689622978792888320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-9m5pz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: decefa53-94e0-4dae-a713-5aa724208ceb,},Ann
otations:map[string]string{io.kubernetes.container.hash: f76a51bb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=882429de-6048-4280-b830-87841c3ff444 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:44:02 pause-882959 crio[2539]: time="2023-07-17 19:44:02.392466442Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=758b97a8-f2b9-47b3-aca4-d73c13331960 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:44:02 pause-882959 crio[2539]: time="2023-07-17 19:44:02.392542437Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=758b97a8-f2b9-47b3-aca4-d73c13331960 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:44:02 pause-882959 crio[2539]: time="2023-07-17 19:44:02.392860500Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4651fcefee21d8da426e4bfbeb78e89164404e4940824cdbc24ea66f1addf81c,PodSandboxId:c2d840626c67986a478229adbe4a47e97620a19641bfe49d8186f52c75c4bed4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689623023424584943,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zfl75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e585a501-1534-4a6d-8c94-fbcb8e24cad2,},Annotations:map[string]string{io.kubernetes.container.hash: 402bb1e2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ddeef059aeac9a52019532e6280b7ad7f6fa81d1425584c651d0ba1e49636e9,PodSandboxId:578a3416f5d892ee2f6abdd060f83a2ca1357577a7ab0d8f90bec5dcfb7e1e12,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689623023187023598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-wqtgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751e7e2a-ac16-4ed3-a2a2-525707d4d84d,},Annotations:map[string]string{io.kubernetes.container.hash: 7711e259,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552f85e3417ed736a1f2555de4433f52c389d54f83a05e9b551ad6e26f41f54b,PodSandboxId:8875318cf44000b9a01b84e602b04469f4cc33e9f4baef75fbab0dc0ec4b8ebe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689623015478720588,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee4f8d
565bd40bb276359f9a3316e30,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f6c76b47d06a2c8510573c69e753d035faa7afc3ecc92113b1d5517c1e7aa2,PodSandboxId:04d6be588f83a25b865a30956b84e40e4d1a51fe0495e92d20824a92634136b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689623015465560988,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 146c3f581a9f3949700e695b352faa81,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65e86b0e324b8ae4f3194fcab72f659490dd508368f347032b4c8ba84a4708d8,PodSandboxId:23bf275989aa0cebbe17e18872e5cfbe841b9ccc1d61eba41efe18b799210bd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689623015395941048,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 630e68b7fc44a2b8708830cc8b87ff6d,},Annotati
ons:map[string]string{io.kubernetes.container.hash: d99c380c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f5a361747df73dae8da3eab5401ba47a5d862de911f553d695d924d854b3325,PodSandboxId:d4ad4d6790f9a14cd0a47e44d12886aba3fabd14ec31fd27b12e132ca35b9f34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689623015405584109,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f90ddf48f9fe6fe40a497facd9340e,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4293fa42,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1d1ae4fbdbc1163966bde2d686c11c1507edff36bf2532346f67a91f5ffccb3,PodSandboxId:d4ad4d6790f9a14cd0a47e44d12886aba3fabd14ec31fd27b12e132ca35b9f34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_EXITED,CreatedAt:1689623006026593834,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f90ddf48f9fe6fe40a497facd9340e,},Annotations:map[string]string{io.kubernet
es.container.hash: 4293fa42,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea020bea67e6847143435bd85b774e1f7ae721b240ead0b53db406afbadd92d,PodSandboxId:c2d840626c67986a478229adbe4a47e97620a19641bfe49d8186f52c75c4bed4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_EXITED,CreatedAt:1689623000841860137,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zfl75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e585a501-1534-4a6d-8c94-fbcb8e24cad2,},Annotations:map[string]string{io.kubernetes.container.hash: 402bb1e2,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e713b911a483c22e00dcb0423f7a498c8a1e2d0cb49f052b7f7ea017d524c14,PodSandboxId:578a3416f5d892ee2f6abdd060f83a2ca1357577a7ab0d8f90bec5dcfb7e1e12,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1689622999414703957,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-wqtgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751e7e2a-ac16-4ed3-a2a2-525707d4d84d,},Annotations:map[string]string{io.kubernetes.container.hash: 7711e259,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c720ae6c03731bdea255e8770e2f31e560717e6d78de0ed00b78066c89a21652,PodSandboxId:8875318cf44000b9a01b84e602b04469f4cc33e9f4baef75fbab0dc0ec4b8ebe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_EXITED,CreatedAt:1689622998714728573,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-882
959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee4f8d565bd40bb276359f9a3316e30,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dd7d620012bded34deeef2ec3386cb9036ac0ed8255e276646abb38d6ec371c,PodSandboxId:23bf275989aa0cebbe17e18872e5cfbe841b9ccc1d61eba41efe18b799210bd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_EXITED,CreatedAt:1689622998290377722,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 630e68b7fc44a2b8708830cc8b87ff6d,},Annotations:map[string]string{io.kubernetes.container.hash: d99c380c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf4ef96d6cd5c498572f61037aad7de89db67f22ada0211a0eede41c89e02832,PodSandboxId:04d6be588f83a25b865a30956b84e40e4d1a51fe0495e92d20824a92634136b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_EXITED,CreatedAt:1689622997752269802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-882959,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 146c3f581a9f3949700e695b352faa81,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cd806479e841d0c7ce2d834a8662b906ce551fbe29c2d6a254748e6426d13ab,PodSandboxId:dae5cce88bd5773450bb49d4763c330ff7c3db4535a422cb4720989af0c6c26a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1689622978792888320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-9m5pz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: decefa53-94e0-4dae-a713-5aa724208ceb,},Ann
otations:map[string]string{io.kubernetes.container.hash: f76a51bb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=758b97a8-f2b9-47b3-aca4-d73c13331960 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:44:02 pause-882959 crio[2539]: time="2023-07-17 19:44:02.444482091Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=eb523230-09b2-48dc-830f-21ca4946097b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:44:02 pause-882959 crio[2539]: time="2023-07-17 19:44:02.444556328Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=eb523230-09b2-48dc-830f-21ca4946097b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:44:02 pause-882959 crio[2539]: time="2023-07-17 19:44:02.444890420Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4651fcefee21d8da426e4bfbeb78e89164404e4940824cdbc24ea66f1addf81c,PodSandboxId:c2d840626c67986a478229adbe4a47e97620a19641bfe49d8186f52c75c4bed4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689623023424584943,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zfl75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e585a501-1534-4a6d-8c94-fbcb8e24cad2,},Annotations:map[string]string{io.kubernetes.container.hash: 402bb1e2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ddeef059aeac9a52019532e6280b7ad7f6fa81d1425584c651d0ba1e49636e9,PodSandboxId:578a3416f5d892ee2f6abdd060f83a2ca1357577a7ab0d8f90bec5dcfb7e1e12,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689623023187023598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-wqtgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751e7e2a-ac16-4ed3-a2a2-525707d4d84d,},Annotations:map[string]string{io.kubernetes.container.hash: 7711e259,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552f85e3417ed736a1f2555de4433f52c389d54f83a05e9b551ad6e26f41f54b,PodSandboxId:8875318cf44000b9a01b84e602b04469f4cc33e9f4baef75fbab0dc0ec4b8ebe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689623015478720588,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee4f8d
565bd40bb276359f9a3316e30,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f6c76b47d06a2c8510573c69e753d035faa7afc3ecc92113b1d5517c1e7aa2,PodSandboxId:04d6be588f83a25b865a30956b84e40e4d1a51fe0495e92d20824a92634136b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689623015465560988,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 146c3f581a9f3949700e695b352faa81,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65e86b0e324b8ae4f3194fcab72f659490dd508368f347032b4c8ba84a4708d8,PodSandboxId:23bf275989aa0cebbe17e18872e5cfbe841b9ccc1d61eba41efe18b799210bd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689623015395941048,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 630e68b7fc44a2b8708830cc8b87ff6d,},Annotati
ons:map[string]string{io.kubernetes.container.hash: d99c380c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f5a361747df73dae8da3eab5401ba47a5d862de911f553d695d924d854b3325,PodSandboxId:d4ad4d6790f9a14cd0a47e44d12886aba3fabd14ec31fd27b12e132ca35b9f34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689623015405584109,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f90ddf48f9fe6fe40a497facd9340e,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4293fa42,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1d1ae4fbdbc1163966bde2d686c11c1507edff36bf2532346f67a91f5ffccb3,PodSandboxId:d4ad4d6790f9a14cd0a47e44d12886aba3fabd14ec31fd27b12e132ca35b9f34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_EXITED,CreatedAt:1689623006026593834,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f90ddf48f9fe6fe40a497facd9340e,},Annotations:map[string]string{io.kubernet
es.container.hash: 4293fa42,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea020bea67e6847143435bd85b774e1f7ae721b240ead0b53db406afbadd92d,PodSandboxId:c2d840626c67986a478229adbe4a47e97620a19641bfe49d8186f52c75c4bed4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_EXITED,CreatedAt:1689623000841860137,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zfl75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e585a501-1534-4a6d-8c94-fbcb8e24cad2,},Annotations:map[string]string{io.kubernetes.container.hash: 402bb1e2,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e713b911a483c22e00dcb0423f7a498c8a1e2d0cb49f052b7f7ea017d524c14,PodSandboxId:578a3416f5d892ee2f6abdd060f83a2ca1357577a7ab0d8f90bec5dcfb7e1e12,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1689622999414703957,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-wqtgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751e7e2a-ac16-4ed3-a2a2-525707d4d84d,},Annotations:map[string]string{io.kubernetes.container.hash: 7711e259,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c720ae6c03731bdea255e8770e2f31e560717e6d78de0ed00b78066c89a21652,PodSandboxId:8875318cf44000b9a01b84e602b04469f4cc33e9f4baef75fbab0dc0ec4b8ebe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_EXITED,CreatedAt:1689622998714728573,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-882
959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee4f8d565bd40bb276359f9a3316e30,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dd7d620012bded34deeef2ec3386cb9036ac0ed8255e276646abb38d6ec371c,PodSandboxId:23bf275989aa0cebbe17e18872e5cfbe841b9ccc1d61eba41efe18b799210bd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_EXITED,CreatedAt:1689622998290377722,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 630e68b7fc44a2b8708830cc8b87ff6d,},Annotations:map[string]string{io.kubernetes.container.hash: d99c380c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf4ef96d6cd5c498572f61037aad7de89db67f22ada0211a0eede41c89e02832,PodSandboxId:04d6be588f83a25b865a30956b84e40e4d1a51fe0495e92d20824a92634136b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_EXITED,CreatedAt:1689622997752269802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-882959,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 146c3f581a9f3949700e695b352faa81,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cd806479e841d0c7ce2d834a8662b906ce551fbe29c2d6a254748e6426d13ab,PodSandboxId:dae5cce88bd5773450bb49d4763c330ff7c3db4535a422cb4720989af0c6c26a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1689622978792888320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-9m5pz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: decefa53-94e0-4dae-a713-5aa724208ceb,},Ann
otations:map[string]string{io.kubernetes.container.hash: f76a51bb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=eb523230-09b2-48dc-830f-21ca4946097b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:44:02 pause-882959 crio[2539]: time="2023-07-17 19:44:02.484965231Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="go-grpc-middleware/chain.go:25" id=5ec85750-2b4a-4349-a9fd-5f9e0dd1473f name=/runtime.v1.RuntimeService/Version
	Jul 17 19:44:02 pause-882959 crio[2539]: time="2023-07-17 19:44:02.485167341Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=5ec85750-2b4a-4349-a9fd-5f9e0dd1473f name=/runtime.v1.RuntimeService/Version
	Jul 17 19:44:02 pause-882959 crio[2539]: time="2023-07-17 19:44:02.498427756Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5a8ac1e1-075a-461c-a63a-6afedece8216 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:44:02 pause-882959 crio[2539]: time="2023-07-17 19:44:02.498525364Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5a8ac1e1-075a-461c-a63a-6afedece8216 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:44:02 pause-882959 crio[2539]: time="2023-07-17 19:44:02.498961412Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4651fcefee21d8da426e4bfbeb78e89164404e4940824cdbc24ea66f1addf81c,PodSandboxId:c2d840626c67986a478229adbe4a47e97620a19641bfe49d8186f52c75c4bed4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689623023424584943,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zfl75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e585a501-1534-4a6d-8c94-fbcb8e24cad2,},Annotations:map[string]string{io.kubernetes.container.hash: 402bb1e2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ddeef059aeac9a52019532e6280b7ad7f6fa81d1425584c651d0ba1e49636e9,PodSandboxId:578a3416f5d892ee2f6abdd060f83a2ca1357577a7ab0d8f90bec5dcfb7e1e12,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689623023187023598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-wqtgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751e7e2a-ac16-4ed3-a2a2-525707d4d84d,},Annotations:map[string]string{io.kubernetes.container.hash: 7711e259,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552f85e3417ed736a1f2555de4433f52c389d54f83a05e9b551ad6e26f41f54b,PodSandboxId:8875318cf44000b9a01b84e602b04469f4cc33e9f4baef75fbab0dc0ec4b8ebe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689623015478720588,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee4f8d
565bd40bb276359f9a3316e30,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f6c76b47d06a2c8510573c69e753d035faa7afc3ecc92113b1d5517c1e7aa2,PodSandboxId:04d6be588f83a25b865a30956b84e40e4d1a51fe0495e92d20824a92634136b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689623015465560988,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 146c3f581a9f3949700e695b352faa81,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65e86b0e324b8ae4f3194fcab72f659490dd508368f347032b4c8ba84a4708d8,PodSandboxId:23bf275989aa0cebbe17e18872e5cfbe841b9ccc1d61eba41efe18b799210bd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689623015395941048,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 630e68b7fc44a2b8708830cc8b87ff6d,},Annotati
ons:map[string]string{io.kubernetes.container.hash: d99c380c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f5a361747df73dae8da3eab5401ba47a5d862de911f553d695d924d854b3325,PodSandboxId:d4ad4d6790f9a14cd0a47e44d12886aba3fabd14ec31fd27b12e132ca35b9f34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689623015405584109,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f90ddf48f9fe6fe40a497facd9340e,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4293fa42,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1d1ae4fbdbc1163966bde2d686c11c1507edff36bf2532346f67a91f5ffccb3,PodSandboxId:d4ad4d6790f9a14cd0a47e44d12886aba3fabd14ec31fd27b12e132ca35b9f34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_EXITED,CreatedAt:1689623006026593834,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f90ddf48f9fe6fe40a497facd9340e,},Annotations:map[string]string{io.kubernet
es.container.hash: 4293fa42,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea020bea67e6847143435bd85b774e1f7ae721b240ead0b53db406afbadd92d,PodSandboxId:c2d840626c67986a478229adbe4a47e97620a19641bfe49d8186f52c75c4bed4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_EXITED,CreatedAt:1689623000841860137,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zfl75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e585a501-1534-4a6d-8c94-fbcb8e24cad2,},Annotations:map[string]string{io.kubernetes.container.hash: 402bb1e2,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e713b911a483c22e00dcb0423f7a498c8a1e2d0cb49f052b7f7ea017d524c14,PodSandboxId:578a3416f5d892ee2f6abdd060f83a2ca1357577a7ab0d8f90bec5dcfb7e1e12,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1689622999414703957,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-wqtgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751e7e2a-ac16-4ed3-a2a2-525707d4d84d,},Annotations:map[string]string{io.kubernetes.container.hash: 7711e259,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c720ae6c03731bdea255e8770e2f31e560717e6d78de0ed00b78066c89a21652,PodSandboxId:8875318cf44000b9a01b84e602b04469f4cc33e9f4baef75fbab0dc0ec4b8ebe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_EXITED,CreatedAt:1689622998714728573,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-882
959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee4f8d565bd40bb276359f9a3316e30,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dd7d620012bded34deeef2ec3386cb9036ac0ed8255e276646abb38d6ec371c,PodSandboxId:23bf275989aa0cebbe17e18872e5cfbe841b9ccc1d61eba41efe18b799210bd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_EXITED,CreatedAt:1689622998290377722,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 630e68b7fc44a2b8708830cc8b87ff6d,},Annotations:map[string]string{io.kubernetes.container.hash: d99c380c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf4ef96d6cd5c498572f61037aad7de89db67f22ada0211a0eede41c89e02832,PodSandboxId:04d6be588f83a25b865a30956b84e40e4d1a51fe0495e92d20824a92634136b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_EXITED,CreatedAt:1689622997752269802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-882959,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 146c3f581a9f3949700e695b352faa81,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cd806479e841d0c7ce2d834a8662b906ce551fbe29c2d6a254748e6426d13ab,PodSandboxId:dae5cce88bd5773450bb49d4763c330ff7c3db4535a422cb4720989af0c6c26a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1689622978792888320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-9m5pz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: decefa53-94e0-4dae-a713-5aa724208ceb,},Ann
otations:map[string]string{io.kubernetes.container.hash: f76a51bb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5a8ac1e1-075a-461c-a63a-6afedece8216 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	4651fcefee21d       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c   19 seconds ago       Running             kube-proxy                2                   c2d840626c679
	2ddeef059aeac       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   19 seconds ago       Running             coredns                   2                   578a3416f5d89
	552f85e3417ed       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a   27 seconds ago       Running             kube-scheduler            2                   8875318cf4400
	86f6c76b47d06       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f   27 seconds ago       Running             kube-controller-manager   2                   04d6be588f83a
	3f5a361747df7       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a   27 seconds ago       Running             kube-apiserver            3                   d4ad4d6790f9a
	65e86b0e324b8       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681   27 seconds ago       Running             etcd                      2                   23bf275989aa0
	b1d1ae4fbdbc1       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a   36 seconds ago       Exited              kube-apiserver            2                   d4ad4d6790f9a
	aea020bea67e6       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c   41 seconds ago       Exited              kube-proxy                1                   c2d840626c679
	9e713b911a483       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   43 seconds ago       Exited              coredns                   1                   578a3416f5d89
	c720ae6c03731       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a   43 seconds ago       Exited              kube-scheduler            1                   8875318cf4400
	7dd7d620012bd       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681   44 seconds ago       Exited              etcd                      1                   23bf275989aa0
	bf4ef96d6cd5c       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f   44 seconds ago       Exited              kube-controller-manager   1                   04d6be588f83a
	3cd806479e841       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   About a minute ago   Exited              coredns                   0                   dae5cce88bd57
	
	* 
	* ==> coredns [2ddeef059aeac9a52019532e6280b7ad7f6fa81d1425584c651d0ba1e49636e9] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41893 - 12192 "HINFO IN 5125562647933031373.447300645467008570. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010659901s
	
	* 
	* ==> coredns [3cd806479e841d0c7ce2d834a8662b906ce551fbe29c2d6a254748e6426d13ab] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:33241 - 31033 "HINFO IN 1596802302159776750.6556267873721546673. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00918775s
	
	* 
	* ==> coredns [9e713b911a483c22e00dcb0423f7a498c8a1e2d0cb49f052b7f7ea017d524c14] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34002 - 47690 "HINFO IN 6248114585339225492.101746283056818754. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.009156952s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> describe nodes <==
	* Name:               pause-882959
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-882959
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5
	                    minikube.k8s.io/name=pause-882959
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T19_42_42_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 19:42:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-882959
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 19:44:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 19:43:42 +0000   Mon, 17 Jul 2023 19:42:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 19:43:42 +0000   Mon, 17 Jul 2023 19:42:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 19:43:42 +0000   Mon, 17 Jul 2023 19:42:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 19:43:42 +0000   Mon, 17 Jul 2023 19:42:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.161
	  Hostname:    pause-882959
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 31e8a3cc94384d26a2c6226e22e3aa53
	  System UUID:                31e8a3cc-9438-4d26-a2c6-226e22e3aa53
	  Boot ID:                    20705618-4604-465f-9569-f9fd101ca5e3
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-wqtgn                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     65s
	  kube-system                 etcd-pause-882959                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         80s
	  kube-system                 kube-apiserver-pause-882959             250m (12%)    0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-controller-manager-pause-882959    200m (10%)    0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-proxy-zfl75                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-scheduler-pause-882959             100m (5%)     0 (0%)      0 (0%)           0 (0%)         81s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 63s                kube-proxy       
	  Normal  Starting                 18s                kube-proxy       
	  Normal  NodeHasSufficientPID     90s (x7 over 90s)  kubelet          Node pause-882959 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    90s (x8 over 90s)  kubelet          Node pause-882959 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  90s (x8 over 90s)  kubelet          Node pause-882959 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  90s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                80s                kubelet          Node pause-882959 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  80s                kubelet          Node pause-882959 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    80s                kubelet          Node pause-882959 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     80s                kubelet          Node pause-882959 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  80s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 80s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           68s                node-controller  Node pause-882959 event: Registered Node pause-882959 in Controller
	  Normal  Starting                 28s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  28s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  27s (x8 over 28s)  kubelet          Node pause-882959 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s (x8 over 28s)  kubelet          Node pause-882959 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s (x7 over 28s)  kubelet          Node pause-882959 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6s                 node-controller  Node pause-882959 event: Registered Node pause-882959 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.075816] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Jul17 19:42] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.662238] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.142880] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.099625] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000093] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.183058] systemd-fstab-generator[643]: Ignoring "noauto" for root device
	[  +0.131327] systemd-fstab-generator[654]: Ignoring "noauto" for root device
	[  +0.159085] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.124017] systemd-fstab-generator[678]: Ignoring "noauto" for root device
	[  +0.321524] systemd-fstab-generator[702]: Ignoring "noauto" for root device
	[ +12.540847] systemd-fstab-generator[927]: Ignoring "noauto" for root device
	[ +10.870742] systemd-fstab-generator[1269]: Ignoring "noauto" for root device
	[Jul17 19:43] systemd-fstab-generator[2166]: Ignoring "noauto" for root device
	[  +0.186600] systemd-fstab-generator[2177]: Ignoring "noauto" for root device
	[  +0.243330] kauditd_printk_skb: 24 callbacks suppressed
	[  +0.389091] systemd-fstab-generator[2322]: Ignoring "noauto" for root device
	[  +0.376788] systemd-fstab-generator[2384]: Ignoring "noauto" for root device
	[  +0.566955] systemd-fstab-generator[2409]: Ignoring "noauto" for root device
	[  +3.780056] kauditd_printk_skb: 3 callbacks suppressed
	[ +17.041347] systemd-fstab-generator[3504]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [65e86b0e324b8ae4f3194fcab72f659490dd508368f347032b4c8ba84a4708d8] <==
	* {"level":"info","ts":"2023-07-17T19:43:43.403Z","caller":"traceutil/trace.go:171","msg":"trace[463496343] transaction","detail":"{read_only:false; response_revision:437; number_of_response:1; }","duration":"207.720558ms","start":"2023-07-17T19:43:43.195Z","end":"2023-07-17T19:43:43.403Z","steps":["trace[463496343] 'process raft request'  (duration: 138.054469ms)","trace[463496343] 'compare'  (duration: 68.052671ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-17T19:43:43.404Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"205.495645ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-17T19:43:43.405Z","caller":"traceutil/trace.go:171","msg":"trace[941357097] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:437; }","duration":"205.940851ms","start":"2023-07-17T19:43:43.199Z","end":"2023-07-17T19:43:43.405Z","steps":["trace[941357097] 'agreement among raft nodes before linearized reading'  (duration: 205.054485ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T19:43:43.406Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"206.748623ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" ","response":"range_response_count:50 size:35108"}
	{"level":"info","ts":"2023-07-17T19:43:43.406Z","caller":"traceutil/trace.go:171","msg":"trace[731044362] range","detail":"{range_begin:/registry/clusterrolebindings/; range_end:/registry/clusterrolebindings0; response_count:50; response_revision:437; }","duration":"207.170319ms","start":"2023-07-17T19:43:43.199Z","end":"2023-07-17T19:43:43.406Z","steps":["trace[731044362] 'agreement among raft nodes before linearized reading'  (duration: 206.348356ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T19:43:43.410Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"188.034799ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-17T19:43:43.411Z","caller":"traceutil/trace.go:171","msg":"trace[2142756550] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:437; }","duration":"188.385589ms","start":"2023-07-17T19:43:43.222Z","end":"2023-07-17T19:43:43.411Z","steps":["trace[2142756550] 'agreement among raft nodes before linearized reading'  (duration: 187.917153ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T19:43:43.412Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"208.546458ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5d78c9869d-9m5pz\" ","response":"range_response_count:1 size:4610"}
	{"level":"info","ts":"2023-07-17T19:43:43.412Z","caller":"traceutil/trace.go:171","msg":"trace[873570816] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5d78c9869d-9m5pz; range_end:; response_count:1; response_revision:437; }","duration":"209.043546ms","start":"2023-07-17T19:43:43.203Z","end":"2023-07-17T19:43:43.412Z","steps":["trace[873570816] 'agreement among raft nodes before linearized reading'  (duration: 208.392148ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T19:43:43.414Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"214.692156ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/system-cluster-critical\" ","response":"range_response_count:1 size:477"}
	{"level":"info","ts":"2023-07-17T19:43:43.417Z","caller":"traceutil/trace.go:171","msg":"trace[1530418752] range","detail":"{range_begin:/registry/priorityclasses/system-cluster-critical; range_end:; response_count:1; response_revision:437; }","duration":"217.729768ms","start":"2023-07-17T19:43:43.199Z","end":"2023-07-17T19:43:43.417Z","steps":["trace[1530418752] 'agreement among raft nodes before linearized reading'  (duration: 214.624685ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T19:43:51.571Z","caller":"traceutil/trace.go:171","msg":"trace[234472061] transaction","detail":"{read_only:false; response_revision:472; number_of_response:1; }","duration":"303.526964ms","start":"2023-07-17T19:43:51.267Z","end":"2023-07-17T19:43:51.571Z","steps":["trace[234472061] 'process raft request'  (duration: 303.349398ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T19:43:51.571Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"280.594151ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-882959\" ","response":"range_response_count:1 size:5478"}
	{"level":"info","ts":"2023-07-17T19:43:51.571Z","caller":"traceutil/trace.go:171","msg":"trace[2019979269] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-882959; range_end:; response_count:1; response_revision:472; }","duration":"280.734286ms","start":"2023-07-17T19:43:51.290Z","end":"2023-07-17T19:43:51.571Z","steps":["trace[2019979269] 'agreement among raft nodes before linearized reading'  (duration: 280.493047ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T19:43:51.571Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T19:43:51.267Z","time spent":"304.052072ms","remote":"127.0.0.1:36360","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4376,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-882959\" mod_revision:419 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-882959\" value_size:4314 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-882959\" > >"}
	{"level":"info","ts":"2023-07-17T19:43:51.571Z","caller":"traceutil/trace.go:171","msg":"trace[2036788317] linearizableReadLoop","detail":"{readStateIndex:506; appliedIndex:506; }","duration":"280.439024ms","start":"2023-07-17T19:43:51.290Z","end":"2023-07-17T19:43:51.571Z","steps":["trace[2036788317] 'read index received'  (duration: 280.433314ms)","trace[2036788317] 'applied index is now lower than readState.Index'  (duration: 4.54µs)"],"step_count":2}
	{"level":"warn","ts":"2023-07-17T19:43:56.499Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.782035ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3423731210247660832 > lease_revoke:<id:2f8389656048dbad>","response":"size:27"}
	{"level":"info","ts":"2023-07-17T19:43:56.499Z","caller":"traceutil/trace.go:171","msg":"trace[2033090678] linearizableReadLoop","detail":"{readStateIndex:520; appliedIndex:519; }","duration":"209.972906ms","start":"2023-07-17T19:43:56.289Z","end":"2023-07-17T19:43:56.499Z","steps":["trace[2033090678] 'read index received'  (duration: 44.973845ms)","trace[2033090678] 'applied index is now lower than readState.Index'  (duration: 164.99758ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-17T19:43:56.499Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"210.11732ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-882959\" ","response":"range_response_count:1 size:5478"}
	{"level":"info","ts":"2023-07-17T19:43:56.499Z","caller":"traceutil/trace.go:171","msg":"trace[1119367753] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-882959; range_end:; response_count:1; response_revision:483; }","duration":"210.150054ms","start":"2023-07-17T19:43:56.289Z","end":"2023-07-17T19:43:56.499Z","steps":["trace[1119367753] 'agreement among raft nodes before linearized reading'  (duration: 210.061134ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T19:43:56.500Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.754398ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-17T19:43:56.500Z","caller":"traceutil/trace.go:171","msg":"trace[487503170] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:483; }","duration":"150.797973ms","start":"2023-07-17T19:43:56.349Z","end":"2023-07-17T19:43:56.500Z","steps":["trace[487503170] 'agreement among raft nodes before linearized reading'  (duration: 150.650569ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T19:43:56.889Z","caller":"traceutil/trace.go:171","msg":"trace[2002614845] transaction","detail":"{read_only:false; response_revision:484; number_of_response:1; }","duration":"373.231263ms","start":"2023-07-17T19:43:56.516Z","end":"2023-07-17T19:43:56.889Z","steps":["trace[2002614845] 'process raft request'  (duration: 342.163559ms)","trace[2002614845] 'compare'  (duration: 30.522504ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-17T19:43:56.889Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T19:43:56.516Z","time spent":"373.617112ms","remote":"127.0.0.1:36360","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5463,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-pause-882959\" mod_revision:433 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-pause-882959\" value_size:5411 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-pause-882959\" > >"}
	{"level":"info","ts":"2023-07-17T19:43:57.072Z","caller":"traceutil/trace.go:171","msg":"trace[2051540128] transaction","detail":"{read_only:false; response_revision:485; number_of_response:1; }","duration":"168.293702ms","start":"2023-07-17T19:43:56.904Z","end":"2023-07-17T19:43:57.072Z","steps":["trace[2051540128] 'process raft request'  (duration: 84.611605ms)","trace[2051540128] 'compare'  (duration: 83.570465ms)"],"step_count":2}
	
	* 
	* ==> etcd [7dd7d620012bded34deeef2ec3386cb9036ac0ed8255e276646abb38d6ec371c] <==
	* {"level":"info","ts":"2023-07-17T19:43:20.144Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.61.161:2380"}
	{"level":"info","ts":"2023-07-17T19:43:20.144Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4166e968fa162f83 switched to configuration voters=(4712710697171431299)"}
	{"level":"info","ts":"2023-07-17T19:43:20.144Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"41c4ab09a330aec","local-member-id":"4166e968fa162f83","added-peer-id":"4166e968fa162f83","added-peer-peer-urls":["https://192.168.61.161:2380"]}
	{"level":"info","ts":"2023-07-17T19:43:20.144Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"41c4ab09a330aec","local-member-id":"4166e968fa162f83","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T19:43:20.144Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T19:43:21.124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4166e968fa162f83 is starting a new election at term 2"}
	{"level":"info","ts":"2023-07-17T19:43:21.124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4166e968fa162f83 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-07-17T19:43:21.124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4166e968fa162f83 received MsgPreVoteResp from 4166e968fa162f83 at term 2"}
	{"level":"info","ts":"2023-07-17T19:43:21.124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4166e968fa162f83 became candidate at term 3"}
	{"level":"info","ts":"2023-07-17T19:43:21.124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4166e968fa162f83 received MsgVoteResp from 4166e968fa162f83 at term 3"}
	{"level":"info","ts":"2023-07-17T19:43:21.124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4166e968fa162f83 became leader at term 3"}
	{"level":"info","ts":"2023-07-17T19:43:21.124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4166e968fa162f83 elected leader 4166e968fa162f83 at term 3"}
	{"level":"info","ts":"2023-07-17T19:43:21.126Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"4166e968fa162f83","local-member-attributes":"{Name:pause-882959 ClientURLs:[https://192.168.61.161:2379]}","request-path":"/0/members/4166e968fa162f83/attributes","cluster-id":"41c4ab09a330aec","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-17T19:43:21.127Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T19:43:21.128Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T19:43:21.128Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.61.161:2379"}
	{"level":"info","ts":"2023-07-17T19:43:21.128Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-17T19:43:21.128Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-17T19:43:21.129Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-17T19:43:32.293Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-07-17T19:43:32.293Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"pause-882959","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.161:2380"],"advertise-client-urls":["https://192.168.61.161:2379"]}
	{"level":"info","ts":"2023-07-17T19:43:32.296Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"4166e968fa162f83","current-leader-member-id":"4166e968fa162f83"}
	{"level":"info","ts":"2023-07-17T19:43:32.301Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.61.161:2380"}
	{"level":"info","ts":"2023-07-17T19:43:32.302Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.61.161:2380"}
	{"level":"info","ts":"2023-07-17T19:43:32.302Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"pause-882959","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.161:2380"],"advertise-client-urls":["https://192.168.61.161:2379"]}
	
	* 
	* ==> kernel <==
	*  19:44:03 up 2 min,  0 users,  load average: 1.58, 0.65, 0.25
	Linux pause-882959 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [3f5a361747df73dae8da3eab5401ba47a5d862de911f553d695d924d854b3325] <==
	* I0717 19:43:42.059149       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0717 19:43:42.059248       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0717 19:43:42.159658       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0717 19:43:42.161964       1 aggregator.go:152] initial CRD sync complete...
	I0717 19:43:42.162149       1 autoregister_controller.go:141] Starting autoregister controller
	I0717 19:43:42.162186       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 19:43:42.162215       1 cache.go:39] Caches are synced for autoregister controller
	I0717 19:43:42.162985       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0717 19:43:42.163622       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 19:43:42.164648       1 shared_informer.go:318] Caches are synced for configmaps
	I0717 19:43:42.164746       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0717 19:43:42.169688       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	E0717 19:43:42.186744       1 controller.go:155] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0717 19:43:42.209259       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0717 19:43:42.209357       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0717 19:43:42.211330       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 19:43:42.633165       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0717 19:43:43.450551       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 19:43:44.973008       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0717 19:43:44.994714       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0717 19:43:45.076358       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0717 19:43:45.149341       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 19:43:45.174732       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 19:43:56.002516       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 19:43:56.044536       1 controller.go:624] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-apiserver [b1d1ae4fbdbc1163966bde2d686c11c1507edff36bf2532346f67a91f5ffccb3] <==
	* 
	* 
	* ==> kube-controller-manager [86f6c76b47d06a2c8510573c69e753d035faa7afc3ecc92113b1d5517c1e7aa2] <==
	* I0717 19:43:55.975169       1 shared_informer.go:318] Caches are synced for ephemeral
	I0717 19:43:55.981465       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0717 19:43:55.981613       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0717 19:43:55.983565       1 shared_informer.go:318] Caches are synced for node
	I0717 19:43:55.983866       1 range_allocator.go:174] "Sending events to api server"
	I0717 19:43:55.984144       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0717 19:43:55.984153       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0717 19:43:55.984160       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0717 19:43:55.989996       1 shared_informer.go:318] Caches are synced for GC
	I0717 19:43:55.999428       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0717 19:43:56.003933       1 shared_informer.go:318] Caches are synced for disruption
	I0717 19:43:56.020259       1 shared_informer.go:318] Caches are synced for taint
	I0717 19:43:56.020546       1 node_lifecycle_controller.go:1223] "Initializing eviction metric for zone" zone=""
	I0717 19:43:56.020719       1 shared_informer.go:318] Caches are synced for endpoint
	I0717 19:43:56.020810       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0717 19:43:56.020952       1 taint_manager.go:211] "Sending events to api server"
	I0717 19:43:56.021412       1 shared_informer.go:318] Caches are synced for stateful set
	I0717 19:43:56.021643       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-882959"
	I0717 19:43:56.021806       1 node_lifecycle_controller.go:1069] "Controller detected that zone is now in new state" zone="" newState=Normal
	I0717 19:43:56.021899       1 event.go:307] "Event occurred" object="pause-882959" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-882959 event: Registered Node pause-882959 in Controller"
	I0717 19:43:56.022141       1 shared_informer.go:318] Caches are synced for resource quota
	I0717 19:43:56.022265       1 shared_informer.go:318] Caches are synced for attach detach
	I0717 19:43:56.401199       1 shared_informer.go:318] Caches are synced for garbage collector
	I0717 19:43:56.417657       1 shared_informer.go:318] Caches are synced for garbage collector
	I0717 19:43:56.417737       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-controller-manager [bf4ef96d6cd5c498572f61037aad7de89db67f22ada0211a0eede41c89e02832] <==
	* I0717 19:43:19.844801       1 serving.go:348] Generated self-signed cert in-memory
	I0717 19:43:20.627380       1 controllermanager.go:187] "Starting" version="v1.27.3"
	I0717 19:43:20.627432       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 19:43:20.629202       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0717 19:43:20.629329       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0717 19:43:20.630342       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0717 19:43:20.630426       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0717 19:43:30.633419       1 controllermanager.go:233] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.61.161:8443/healthz\": dial tcp 192.168.61.161:8443: connect: connection refused"
	
	* 
	* ==> kube-proxy [4651fcefee21d8da426e4bfbeb78e89164404e4940824cdbc24ea66f1addf81c] <==
	* I0717 19:43:44.031240       1 node.go:141] Successfully retrieved node IP: 192.168.61.161
	I0717 19:43:44.031384       1 server_others.go:110] "Detected node IP" address="192.168.61.161"
	I0717 19:43:44.031461       1 server_others.go:554] "Using iptables proxy"
	I0717 19:43:44.204310       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0717 19:43:44.204566       1 server_others.go:192] "Using iptables Proxier"
	I0717 19:43:44.204797       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 19:43:44.205944       1 server.go:658] "Version info" version="v1.27.3"
	I0717 19:43:44.206376       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 19:43:44.207567       1 config.go:188] "Starting service config controller"
	I0717 19:43:44.207659       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 19:43:44.207760       1 config.go:97] "Starting endpoint slice config controller"
	I0717 19:43:44.207795       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 19:43:44.208540       1 config.go:315] "Starting node config controller"
	I0717 19:43:44.208594       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 19:43:44.308405       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0717 19:43:44.308434       1 shared_informer.go:318] Caches are synced for service config
	I0717 19:43:44.308763       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [aea020bea67e6847143435bd85b774e1f7ae721b240ead0b53db406afbadd92d] <==
	* E0717 19:43:21.105116       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-882959": dial tcp 192.168.61.161:8443: connect: connection refused
	E0717 19:43:22.211693       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-882959": dial tcp 192.168.61.161:8443: connect: connection refused
	E0717 19:43:24.217872       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-882959": dial tcp 192.168.61.161:8443: connect: connection refused
	
	* 
	* ==> kube-scheduler [552f85e3417ed736a1f2555de4433f52c389d54f83a05e9b551ad6e26f41f54b] <==
	* I0717 19:43:39.232436       1 serving.go:348] Generated self-signed cert in-memory
	W0717 19:43:42.100284       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 19:43:42.100495       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 19:43:42.100617       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 19:43:42.100647       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 19:43:42.183854       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.3"
	I0717 19:43:42.184044       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 19:43:42.200186       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 19:43:42.200329       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 19:43:42.203016       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0717 19:43:42.204329       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 19:43:42.306515       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [c720ae6c03731bdea255e8770e2f31e560717e6d78de0ed00b78066c89a21652] <==
	* E0717 19:43:29.144988       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.61.161:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	W0717 19:43:29.277716       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.61.161:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	E0717 19:43:29.277881       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.61.161:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	W0717 19:43:29.286775       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.61.161:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	E0717 19:43:29.286940       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.61.161:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	W0717 19:43:29.752228       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.61.161:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	E0717 19:43:29.752382       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.61.161:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	W0717 19:43:29.758424       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.61.161:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	E0717 19:43:29.758496       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.61.161:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	W0717 19:43:29.871633       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.61.161:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	E0717 19:43:29.871729       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.61.161:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	W0717 19:43:30.517960       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.61.161:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	E0717 19:43:30.518262       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.61.161:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	W0717 19:43:30.545764       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.61.161:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	E0717 19:43:30.545907       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.61.161:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	W0717 19:43:30.685961       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.61.161:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	E0717 19:43:30.686180       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.61.161:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	W0717 19:43:30.996542       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://192.168.61.161:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	E0717 19:43:30.996681       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.61.161:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	W0717 19:43:31.231840       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.61.161:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	E0717 19:43:31.231967       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.61.161:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	W0717 19:43:31.269050       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.61.161:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	E0717 19:43:31.269276       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.61.161:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	I0717 19:43:32.123137       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0717 19:43:32.123912       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-07-17 19:42:04 UTC, ends at Mon 2023-07-17 19:44:03 UTC. --
	Jul 17 19:43:35 pause-882959 kubelet[3510]: E0717 19:43:35.776046    3510 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-882959&limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	Jul 17 19:43:35 pause-882959 kubelet[3510]: I0717 19:43:35.827568    3510 kubelet_node_status.go:70] "Attempting to register node" node="pause-882959"
	Jul 17 19:43:35 pause-882959 kubelet[3510]: E0717 19:43:35.828315    3510 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.161:8443: connect: connection refused" node="pause-882959"
	Jul 17 19:43:37 pause-882959 kubelet[3510]: I0717 19:43:37.430534    3510 kubelet_node_status.go:70] "Attempting to register node" node="pause-882959"
	Jul 17 19:43:42 pause-882959 kubelet[3510]: I0717 19:43:42.234244    3510 kubelet_node_status.go:108] "Node was previously registered" node="pause-882959"
	Jul 17 19:43:42 pause-882959 kubelet[3510]: I0717 19:43:42.234335    3510 kubelet_node_status.go:73] "Successfully registered node" node="pause-882959"
	Jul 17 19:43:42 pause-882959 kubelet[3510]: I0717 19:43:42.237487    3510 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 17 19:43:42 pause-882959 kubelet[3510]: I0717 19:43:42.238674    3510 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 17 19:43:42 pause-882959 kubelet[3510]: I0717 19:43:42.289438    3510 apiserver.go:52] "Watching apiserver"
	Jul 17 19:43:42 pause-882959 kubelet[3510]: I0717 19:43:42.294004    3510 topology_manager.go:212] "Topology Admit Handler"
	Jul 17 19:43:42 pause-882959 kubelet[3510]: I0717 19:43:42.294329    3510 topology_manager.go:212] "Topology Admit Handler"
	Jul 17 19:43:42 pause-882959 kubelet[3510]: I0717 19:43:42.294457    3510 topology_manager.go:212] "Topology Admit Handler"
	Jul 17 19:43:42 pause-882959 kubelet[3510]: I0717 19:43:42.307705    3510 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
	Jul 17 19:43:42 pause-882959 kubelet[3510]: I0717 19:43:42.373299    3510 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9txg\" (UniqueName: \"kubernetes.io/projected/751e7e2a-ac16-4ed3-a2a2-525707d4d84d-kube-api-access-p9txg\") pod \"coredns-5d78c9869d-wqtgn\" (UID: \"751e7e2a-ac16-4ed3-a2a2-525707d4d84d\") " pod="kube-system/coredns-5d78c9869d-wqtgn"
	Jul 17 19:43:42 pause-882959 kubelet[3510]: I0717 19:43:42.373401    3510 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwjjr\" (UniqueName: \"kubernetes.io/projected/e585a501-1534-4a6d-8c94-fbcb8e24cad2-kube-api-access-xwjjr\") pod \"kube-proxy-zfl75\" (UID: \"e585a501-1534-4a6d-8c94-fbcb8e24cad2\") " pod="kube-system/kube-proxy-zfl75"
	Jul 17 19:43:42 pause-882959 kubelet[3510]: I0717 19:43:42.373441    3510 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/751e7e2a-ac16-4ed3-a2a2-525707d4d84d-config-volume\") pod \"coredns-5d78c9869d-wqtgn\" (UID: \"751e7e2a-ac16-4ed3-a2a2-525707d4d84d\") " pod="kube-system/coredns-5d78c9869d-wqtgn"
	Jul 17 19:43:42 pause-882959 kubelet[3510]: I0717 19:43:42.373483    3510 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e585a501-1534-4a6d-8c94-fbcb8e24cad2-kube-proxy\") pod \"kube-proxy-zfl75\" (UID: \"e585a501-1534-4a6d-8c94-fbcb8e24cad2\") " pod="kube-system/kube-proxy-zfl75"
	Jul 17 19:43:42 pause-882959 kubelet[3510]: I0717 19:43:42.373558    3510 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e585a501-1534-4a6d-8c94-fbcb8e24cad2-xtables-lock\") pod \"kube-proxy-zfl75\" (UID: \"e585a501-1534-4a6d-8c94-fbcb8e24cad2\") " pod="kube-system/kube-proxy-zfl75"
	Jul 17 19:43:42 pause-882959 kubelet[3510]: I0717 19:43:42.373595    3510 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e585a501-1534-4a6d-8c94-fbcb8e24cad2-lib-modules\") pod \"kube-proxy-zfl75\" (UID: \"e585a501-1534-4a6d-8c94-fbcb8e24cad2\") " pod="kube-system/kube-proxy-zfl75"
	Jul 17 19:43:42 pause-882959 kubelet[3510]: I0717 19:43:42.373610    3510 reconciler.go:41] "Reconciler: start to sync state"
	Jul 17 19:43:42 pause-882959 kubelet[3510]: I0717 19:43:42.596014    3510 scope.go:115] "RemoveContainer" containerID="9e713b911a483c22e00dcb0423f7a498c8a1e2d0cb49f052b7f7ea017d524c14"
	Jul 17 19:43:42 pause-882959 kubelet[3510]: I0717 19:43:42.598803    3510 scope.go:115] "RemoveContainer" containerID="aea020bea67e6847143435bd85b774e1f7ae721b240ead0b53db406afbadd92d"
	Jul 17 19:43:44 pause-882959 kubelet[3510]: I0717 19:43:44.332766    3510 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=decefa53-94e0-4dae-a713-5aa724208ceb path="/var/lib/kubelet/pods/decefa53-94e0-4dae-a713-5aa724208ceb/volumes"
	Jul 17 19:43:45 pause-882959 kubelet[3510]: I0717 19:43:45.591405    3510 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	Jul 17 19:43:47 pause-882959 kubelet[3510]: I0717 19:43:47.593917    3510 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-882959 -n pause-882959
helpers_test.go:261: (dbg) Run:  kubectl --context pause-882959 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-882959 -n pause-882959
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-882959 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-882959 logs -n 25: (4.745450628s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p multinode-464644-m03        | multinode-464644-m03      | jenkins | v1.30.1 | 17 Jul 23 19:34 UTC | 17 Jul 23 19:34 UTC |
	| delete  | -p multinode-464644            | multinode-464644          | jenkins | v1.30.1 | 17 Jul 23 19:34 UTC | 17 Jul 23 19:34 UTC |
	| start   | -p test-preload-585582         | test-preload-585582       | jenkins | v1.30.1 | 17 Jul 23 19:34 UTC | 17 Jul 23 19:37 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true  |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2  |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4   |                           |         |         |                     |                     |
	| image   | test-preload-585582 image pull | test-preload-585582       | jenkins | v1.30.1 | 17 Jul 23 19:37 UTC | 17 Jul 23 19:37 UTC |
	|         | gcr.io/k8s-minikube/busybox    |                           |         |         |                     |                     |
	| stop    | -p test-preload-585582         | test-preload-585582       | jenkins | v1.30.1 | 17 Jul 23 19:37 UTC |                     |
	| delete  | -p test-preload-585582         | test-preload-585582       | jenkins | v1.30.1 | 17 Jul 23 19:39 UTC | 17 Jul 23 19:39 UTC |
	| start   | -p scheduled-stop-119343       | scheduled-stop-119343     | jenkins | v1.30.1 | 17 Jul 23 19:39 UTC | 17 Jul 23 19:40 UTC |
	|         | --memory=2048 --driver=kvm2    |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-119343       | scheduled-stop-119343     | jenkins | v1.30.1 | 17 Jul 23 19:40 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-119343       | scheduled-stop-119343     | jenkins | v1.30.1 | 17 Jul 23 19:40 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-119343       | scheduled-stop-119343     | jenkins | v1.30.1 | 17 Jul 23 19:40 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-119343       | scheduled-stop-119343     | jenkins | v1.30.1 | 17 Jul 23 19:40 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-119343       | scheduled-stop-119343     | jenkins | v1.30.1 | 17 Jul 23 19:40 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-119343       | scheduled-stop-119343     | jenkins | v1.30.1 | 17 Jul 23 19:40 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-119343       | scheduled-stop-119343     | jenkins | v1.30.1 | 17 Jul 23 19:40 UTC | 17 Jul 23 19:40 UTC |
	|         | --cancel-scheduled             |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-119343       | scheduled-stop-119343     | jenkins | v1.30.1 | 17 Jul 23 19:40 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-119343       | scheduled-stop-119343     | jenkins | v1.30.1 | 17 Jul 23 19:40 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-119343       | scheduled-stop-119343     | jenkins | v1.30.1 | 17 Jul 23 19:40 UTC | 17 Jul 23 19:41 UTC |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| delete  | -p scheduled-stop-119343       | scheduled-stop-119343     | jenkins | v1.30.1 | 17 Jul 23 19:41 UTC | 17 Jul 23 19:41 UTC |
	| start   | -p pause-882959 --memory=2048  | pause-882959              | jenkins | v1.30.1 | 17 Jul 23 19:41 UTC | 17 Jul 23 19:43 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p offline-crio-814891         | offline-crio-814891       | jenkins | v1.30.1 | 17 Jul 23 19:41 UTC | 17 Jul 23 19:43 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --memory=2048             |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-852374   | kubernetes-upgrade-852374 | jenkins | v1.30.1 | 17 Jul 23 19:41 UTC | 17 Jul 23 19:43 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p pause-882959                | pause-882959              | jenkins | v1.30.1 | 17 Jul 23 19:43 UTC | 17 Jul 23 19:44 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p offline-crio-814891         | offline-crio-814891       | jenkins | v1.30.1 | 17 Jul 23 19:43 UTC | 17 Jul 23 19:43 UTC |
	| stop    | -p kubernetes-upgrade-852374   | kubernetes-upgrade-852374 | jenkins | v1.30.1 | 17 Jul 23 19:43 UTC | 17 Jul 23 19:43 UTC |
	| start   | -p kubernetes-upgrade-852374   | kubernetes-upgrade-852374 | jenkins | v1.30.1 | 17 Jul 23 19:43 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 19:43:33
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 19:43:33.884417 1092821 out.go:296] Setting OutFile to fd 1 ...
	I0717 19:43:33.884611 1092821 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:43:33.884625 1092821 out.go:309] Setting ErrFile to fd 2...
	I0717 19:43:33.884631 1092821 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:43:33.884973 1092821 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1061725/.minikube/bin
	I0717 19:43:33.885851 1092821 out.go:303] Setting JSON to false
	I0717 19:43:33.887271 1092821 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":15965,"bootTime":1689607049,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 19:43:33.887372 1092821 start.go:138] virtualization: kvm guest
	I0717 19:43:33.890836 1092821 out.go:177] * [kubernetes-upgrade-852374] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 19:43:33.893319 1092821 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 19:43:33.893349 1092821 notify.go:220] Checking for updates...
	I0717 19:43:33.895536 1092821 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:43:33.898241 1092821 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 19:43:33.900244 1092821 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1061725/.minikube
	I0717 19:43:33.901946 1092821 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 19:43:33.903989 1092821 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 19:43:33.906296 1092821 config.go:182] Loaded profile config "kubernetes-upgrade-852374": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0717 19:43:33.906660 1092821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:43:33.906746 1092821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:43:33.928006 1092821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37147
	I0717 19:43:33.928651 1092821 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:43:33.929427 1092821 main.go:141] libmachine: Using API Version  1
	I0717 19:43:33.929454 1092821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:43:33.929862 1092821 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:43:33.930101 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .DriverName
	I0717 19:43:33.930420 1092821 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 19:43:33.930893 1092821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:43:33.930942 1092821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:43:33.950780 1092821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39727
	I0717 19:43:33.951299 1092821 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:43:33.951929 1092821 main.go:141] libmachine: Using API Version  1
	I0717 19:43:33.951953 1092821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:43:33.952306 1092821 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:43:33.952507 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .DriverName
	I0717 19:43:33.995902 1092821 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 19:43:33.997723 1092821 start.go:298] selected driver: kvm2
	I0717 19:43:33.997748 1092821 start.go:880] validating driver "kvm2" against &{Name:kubernetes-upgrade-852374 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-852374 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:43:33.997911 1092821 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 19:43:33.998934 1092821 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:43:33.999054 1092821 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16890-1061725/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 19:43:34.015977 1092821 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.30.1
	I0717 19:43:34.016501 1092821 cni.go:84] Creating CNI manager for ""
	I0717 19:43:34.016532 1092821 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:43:34.016542 1092821 start_flags.go:319] config:
	{Name:kubernetes-upgrade-852374 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kubernetes-upgrade-852374 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:43:34.016738 1092821 iso.go:125] acquiring lock: {Name:mk4c08fdde891ec9d41b144ca36ec57b2da32175 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:43:34.019408 1092821 out.go:177] * Starting control plane node kubernetes-upgrade-852374 in cluster kubernetes-upgrade-852374
	I0717 19:43:34.022110 1092821 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 19:43:34.022193 1092821 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0717 19:43:34.022208 1092821 cache.go:57] Caching tarball of preloaded images
	I0717 19:43:34.022315 1092821 preload.go:174] Found /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 19:43:34.022330 1092821 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 19:43:34.022512 1092821 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/kubernetes-upgrade-852374/config.json ...
	I0717 19:43:34.022792 1092821 start.go:365] acquiring machines lock for kubernetes-upgrade-852374: {Name:mk1fbc5d19c9a63796b02883911e39f1c406ff89 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:43:34.022857 1092821 start.go:369] acquired machines lock for "kubernetes-upgrade-852374" in 37.825µs
	I0717 19:43:34.022979 1092821 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:43:34.023002 1092821 fix.go:54] fixHost starting: 
	I0717 19:43:34.023505 1092821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:43:34.023558 1092821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:43:34.039422 1092821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40545
	I0717 19:43:34.039975 1092821 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:43:34.040640 1092821 main.go:141] libmachine: Using API Version  1
	I0717 19:43:34.040668 1092821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:43:34.041046 1092821 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:43:34.041307 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .DriverName
	I0717 19:43:34.041474 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetState
	I0717 19:43:34.043498 1092821 fix.go:102] recreateIfNeeded on kubernetes-upgrade-852374: state=Stopped err=<nil>
	I0717 19:43:34.043538 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .DriverName
	W0717 19:43:34.043721 1092821 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 19:43:34.046452 1092821 out.go:177] * Restarting existing kvm2 VM for "kubernetes-upgrade-852374" ...
	I0717 19:43:32.472654 1092475 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 b1d1ae4fbdbc1163966bde2d686c11c1507edff36bf2532346f67a91f5ffccb3 aea020bea67e6847143435bd85b774e1f7ae721b240ead0b53db406afbadd92d 9e713b911a483c22e00dcb0423f7a498c8a1e2d0cb49f052b7f7ea017d524c14 c720ae6c03731bdea255e8770e2f31e560717e6d78de0ed00b78066c89a21652 7dd7d620012bded34deeef2ec3386cb9036ac0ed8255e276646abb38d6ec371c bf4ef96d6cd5c498572f61037aad7de89db67f22ada0211a0eede41c89e02832 49baf58ec15908cae47539fcabe15ec5881d91984f515fc8893d1e5ba4fa945a e27349b47eb29995d13e036904ad2489fbdd0158ab94dd5af41f5b54a54b23b4 3cd806479e841d0c7ce2d834a8662b906ce551fbe29c2d6a254748e6426d13ab dd136db1eb6034e459ff67b5521d6d3b4cdfed3e633f6c912bd853160d4d89af 303dedd41efda4df68fc27ba33a296176fe13089eccec1d1c7c7e0acba2026f4 4c073da8a04365a46710029851ee407893c131ec9a0206f3896ff2e8d7e32023 5e89c055cbdef19b4c97a2491a9b4f8f0543b60a2ec2615f5dceb70f936d7ab9: (5.85764714s)
	W0717 19:43:32.472751 1092475 kubeadm.go:689] Failed to stop kube-system containers: port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 b1d1ae4fbdbc1163966bde2d686c11c1507edff36bf2532346f67a91f5ffccb3 aea020bea67e6847143435bd85b774e1f7ae721b240ead0b53db406afbadd92d 9e713b911a483c22e00dcb0423f7a498c8a1e2d0cb49f052b7f7ea017d524c14 c720ae6c03731bdea255e8770e2f31e560717e6d78de0ed00b78066c89a21652 7dd7d620012bded34deeef2ec3386cb9036ac0ed8255e276646abb38d6ec371c bf4ef96d6cd5c498572f61037aad7de89db67f22ada0211a0eede41c89e02832 49baf58ec15908cae47539fcabe15ec5881d91984f515fc8893d1e5ba4fa945a e27349b47eb29995d13e036904ad2489fbdd0158ab94dd5af41f5b54a54b23b4 3cd806479e841d0c7ce2d834a8662b906ce551fbe29c2d6a254748e6426d13ab dd136db1eb6034e459ff67b5521d6d3b4cdfed3e633f6c912bd853160d4d89af 303dedd41efda4df68fc27ba33a296176fe13089eccec1d1c7c7e0acba2026f4 4c073da8a04365a46710029851ee407893c131ec9a0206f3896ff2e8d7e32023 5e89c055cbdef19b4c97a2491a9b4f8f0543b60a2ec2615f5dceb70f936d7ab9: Process exited with status 1
	stdout:
	b1d1ae4fbdbc1163966bde2d686c11c1507edff36bf2532346f67a91f5ffccb3
	aea020bea67e6847143435bd85b774e1f7ae721b240ead0b53db406afbadd92d
	9e713b911a483c22e00dcb0423f7a498c8a1e2d0cb49f052b7f7ea017d524c14
	c720ae6c03731bdea255e8770e2f31e560717e6d78de0ed00b78066c89a21652
	7dd7d620012bded34deeef2ec3386cb9036ac0ed8255e276646abb38d6ec371c
	bf4ef96d6cd5c498572f61037aad7de89db67f22ada0211a0eede41c89e02832
	
	stderr:
	time="2023-07-17T19:43:32Z" level=fatal msg="stopping the container \"49baf58ec15908cae47539fcabe15ec5881d91984f515fc8893d1e5ba4fa945a\": rpc error: code = NotFound desc = could not find container \"49baf58ec15908cae47539fcabe15ec5881d91984f515fc8893d1e5ba4fa945a\": container with ID starting with 49baf58ec15908cae47539fcabe15ec5881d91984f515fc8893d1e5ba4fa945a not found: ID does not exist"
	I0717 19:43:32.472817 1092475 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:43:32.518774 1092475 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:43:32.531325 1092475 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jul 17 19:42 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Jul 17 19:42 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Jul 17 19:42 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Jul 17 19:42 /etc/kubernetes/scheduler.conf
	
	I0717 19:43:32.531416 1092475 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:43:32.544424 1092475 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:43:32.557755 1092475 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:43:32.570144 1092475 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:43:32.570297 1092475 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:43:32.580106 1092475 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:43:32.589431 1092475 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:43:32.589513 1092475 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:43:32.599445 1092475 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:43:32.609170 1092475 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 19:43:32.609209 1092475 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:43:32.681988 1092475 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:43:33.858988 1092475 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.176961226s)
	I0717 19:43:33.859017 1092475 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:43:34.156902 1092475 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:43:34.278216 1092475 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:43:34.573068 1092475 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:43:34.573159 1092475 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:43:35.138818 1092475 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:43:34.048386 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .Start
	I0717 19:43:34.048775 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Ensuring networks are active...
	I0717 19:43:34.049942 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Ensuring network default is active
	I0717 19:43:34.050471 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Ensuring network mk-kubernetes-upgrade-852374 is active
	I0717 19:43:34.050867 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Getting domain xml...
	I0717 19:43:34.051675 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Creating domain...
	I0717 19:43:35.531075 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Waiting to get IP...
	I0717 19:43:35.532528 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:35.533091 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | unable to find current IP address of domain kubernetes-upgrade-852374 in network mk-kubernetes-upgrade-852374
	I0717 19:43:35.533381 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | I0717 19:43:35.533250 1092866 retry.go:31] will retry after 235.204056ms: waiting for machine to come up
	I0717 19:43:35.769906 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:35.770401 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | unable to find current IP address of domain kubernetes-upgrade-852374 in network mk-kubernetes-upgrade-852374
	I0717 19:43:35.770436 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | I0717 19:43:35.770363 1092866 retry.go:31] will retry after 354.76412ms: waiting for machine to come up
	I0717 19:43:36.127320 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:36.127912 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | unable to find current IP address of domain kubernetes-upgrade-852374 in network mk-kubernetes-upgrade-852374
	I0717 19:43:36.128014 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | I0717 19:43:36.127973 1092866 retry.go:31] will retry after 442.510798ms: waiting for machine to come up
	I0717 19:43:36.572712 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:36.573501 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | unable to find current IP address of domain kubernetes-upgrade-852374 in network mk-kubernetes-upgrade-852374
	I0717 19:43:36.573553 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | I0717 19:43:36.573408 1092866 retry.go:31] will retry after 578.01041ms: waiting for machine to come up
	I0717 19:43:37.153768 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:37.154597 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | unable to find current IP address of domain kubernetes-upgrade-852374 in network mk-kubernetes-upgrade-852374
	I0717 19:43:37.154789 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | I0717 19:43:37.154697 1092866 retry.go:31] will retry after 661.501272ms: waiting for machine to come up
	I0717 19:43:37.817547 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:37.818326 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | unable to find current IP address of domain kubernetes-upgrade-852374 in network mk-kubernetes-upgrade-852374
	I0717 19:43:37.818356 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | I0717 19:43:37.818260 1092866 retry.go:31] will retry after 890.152289ms: waiting for machine to come up
	I0717 19:43:38.710117 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:38.710601 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | unable to find current IP address of domain kubernetes-upgrade-852374 in network mk-kubernetes-upgrade-852374
	I0717 19:43:38.710633 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | I0717 19:43:38.710499 1092866 retry.go:31] will retry after 1.066533307s: waiting for machine to come up
	I0717 19:43:35.638722 1092475 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:43:36.138806 1092475 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:43:36.638971 1092475 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:43:36.679553 1092475 api_server.go:72] duration metric: took 2.106485099s to wait for apiserver process to appear ...
	I0717 19:43:36.679586 1092475 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:43:36.679610 1092475 api_server.go:253] Checking apiserver healthz at https://192.168.61.161:8443/healthz ...
	I0717 19:43:36.680344 1092475 api_server.go:269] stopped: https://192.168.61.161:8443/healthz: Get "https://192.168.61.161:8443/healthz": dial tcp 192.168.61.161:8443: connect: connection refused
	I0717 19:43:37.180936 1092475 api_server.go:253] Checking apiserver healthz at https://192.168.61.161:8443/healthz ...
	I0717 19:43:39.778814 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:39.779311 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | unable to find current IP address of domain kubernetes-upgrade-852374 in network mk-kubernetes-upgrade-852374
	I0717 19:43:39.779342 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | I0717 19:43:39.779272 1092866 retry.go:31] will retry after 925.553521ms: waiting for machine to come up
	I0717 19:43:40.706910 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:40.707436 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | unable to find current IP address of domain kubernetes-upgrade-852374 in network mk-kubernetes-upgrade-852374
	I0717 19:43:40.707457 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | I0717 19:43:40.707384 1092866 retry.go:31] will retry after 1.463072592s: waiting for machine to come up
	I0717 19:43:42.172003 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:42.172463 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | unable to find current IP address of domain kubernetes-upgrade-852374 in network mk-kubernetes-upgrade-852374
	I0717 19:43:42.172493 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | I0717 19:43:42.172376 1092866 retry.go:31] will retry after 1.893228748s: waiting for machine to come up
	I0717 19:43:42.181609 1092475 api_server.go:269] stopped: https://192.168.61.161:8443/healthz: Get "https://192.168.61.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 19:43:42.181662 1092475 api_server.go:253] Checking apiserver healthz at https://192.168.61.161:8443/healthz ...
	I0717 19:43:42.195579 1092475 api_server.go:279] https://192.168.61.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:43:42.195627 1092475 api_server.go:103] status: https://192.168.61.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:43:42.195643 1092475 api_server.go:253] Checking apiserver healthz at https://192.168.61.161:8443/healthz ...
	I0717 19:43:42.206747 1092475 api_server.go:279] https://192.168.61.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:43:42.206787 1092475 api_server.go:103] status: https://192.168.61.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:43:42.681383 1092475 api_server.go:253] Checking apiserver healthz at https://192.168.61.161:8443/healthz ...
	I0717 19:43:42.690794 1092475 api_server.go:279] https://192.168.61.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:43:42.690839 1092475 api_server.go:103] status: https://192.168.61.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:43:43.181401 1092475 api_server.go:253] Checking apiserver healthz at https://192.168.61.161:8443/healthz ...
	I0717 19:43:43.434093 1092475 api_server.go:279] https://192.168.61.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:43:43.434148 1092475 api_server.go:103] status: https://192.168.61.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:43:43.680501 1092475 api_server.go:253] Checking apiserver healthz at https://192.168.61.161:8443/healthz ...
	I0717 19:43:43.708077 1092475 api_server.go:279] https://192.168.61.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:43:43.708119 1092475 api_server.go:103] status: https://192.168.61.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:43:44.180578 1092475 api_server.go:253] Checking apiserver healthz at https://192.168.61.161:8443/healthz ...
	I0717 19:43:44.189711 1092475 api_server.go:279] https://192.168.61.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:43:44.189756 1092475 api_server.go:103] status: https://192.168.61.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:43:44.680989 1092475 api_server.go:253] Checking apiserver healthz at https://192.168.61.161:8443/healthz ...
	I0717 19:43:44.694918 1092475 api_server.go:279] https://192.168.61.161:8443/healthz returned 200:
	ok
	I0717 19:43:44.720512 1092475 api_server.go:141] control plane version: v1.27.3
	I0717 19:43:44.720608 1092475 api_server.go:131] duration metric: took 8.041013212s to wait for apiserver health ...
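The retry loop above polls /healthz until the apiserver finally answers 200. A minimal manual check against the same endpoint, assuming anonymous access to /healthz is allowed (the upstream default RBAC binding for system:unauthenticated), looks roughly like this:

    # --insecure because the test profile's apiserver uses a self-signed CA;
    # ?verbose asks for the per-hook [+]/[-] breakdown seen in the 500 bodies above
    curl --insecure "https://192.168.61.161:8443/healthz?verbose"
    # 200 with body "ok" matches the final successful probe;
    # 500 lists each failed post-start hook, as in the earlier attempts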
	I0717 19:43:44.720631 1092475 cni.go:84] Creating CNI manager for ""
	I0717 19:43:44.720655 1092475 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:43:44.723277 1092475 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:43:44.725521 1092475 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:43:44.739760 1092475 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
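The 457-byte conflist body itself is not included in the log, so the following is only a sketch of a typical bridge CNI config of the kind written to /etc/cni/net.d/1-k8s.conflist, not a copy of the real file (subnet and plugin names are assumptions):

    # illustrative bridge + portmap chain only
    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF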
	I0717 19:43:44.768419 1092475 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:43:44.784903 1092475 system_pods.go:59] 6 kube-system pods found
	I0717 19:43:44.784964 1092475 system_pods.go:61] "coredns-5d78c9869d-wqtgn" [751e7e2a-ac16-4ed3-a2a2-525707d4d84d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 19:43:44.784984 1092475 system_pods.go:61] "etcd-pause-882959" [78ec428a-348c-4a75-8ec7-da945774031b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:43:44.784995 1092475 system_pods.go:61] "kube-apiserver-pause-882959" [9d878a73-d7a9-41d1-82e3-f0f10b1294b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:43:44.785006 1092475 system_pods.go:61] "kube-controller-manager-pause-882959" [7e8f2b84-b012-4c12-9f0f-b09e5a5b2a41] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:43:44.785065 1092475 system_pods.go:61] "kube-proxy-zfl75" [e585a501-1534-4a6d-8c94-fbcb8e24cad2] Running
	I0717 19:43:44.785081 1092475 system_pods.go:61] "kube-scheduler-pause-882959" [ed96358e-3d5e-4038-bd60-2ce64193f430] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:43:44.785089 1092475 system_pods.go:74] duration metric: took 16.639092ms to wait for pod list to return data ...
	I0717 19:43:44.785099 1092475 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:43:44.795837 1092475 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 19:43:44.795942 1092475 node_conditions.go:123] node cpu capacity is 2
	I0717 19:43:44.795990 1092475 node_conditions.go:105] duration metric: took 10.859648ms to run NodePressure ...
	I0717 19:43:44.796031 1092475 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:43:45.209627 1092475 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 19:43:45.221366 1092475 kubeadm.go:787] kubelet initialised
	I0717 19:43:45.221459 1092475 kubeadm.go:788] duration metric: took 11.735225ms waiting for restarted kubelet to initialise ...
	I0717 19:43:45.221482 1092475 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:43:45.243781 1092475 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-wqtgn" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:44.067485 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:44.068259 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | unable to find current IP address of domain kubernetes-upgrade-852374 in network mk-kubernetes-upgrade-852374
	I0717 19:43:44.068300 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | I0717 19:43:44.068185 1092866 retry.go:31] will retry after 2.487027797s: waiting for machine to come up
	I0717 19:43:46.558182 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:46.558684 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | unable to find current IP address of domain kubernetes-upgrade-852374 in network mk-kubernetes-upgrade-852374
	I0717 19:43:46.558708 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | I0717 19:43:46.558627 1092866 retry.go:31] will retry after 2.657681621s: waiting for machine to come up
	I0717 19:43:47.282743 1092475 pod_ready.go:102] pod "coredns-5d78c9869d-wqtgn" in "kube-system" namespace has status "Ready":"False"
	I0717 19:43:47.782819 1092475 pod_ready.go:92] pod "coredns-5d78c9869d-wqtgn" in "kube-system" namespace has status "Ready":"True"
	I0717 19:43:47.782848 1092475 pod_ready.go:81] duration metric: took 2.538958892s waiting for pod "coredns-5d78c9869d-wqtgn" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:47.782859 1092475 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-882959" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:49.805264 1092475 pod_ready.go:102] pod "etcd-pause-882959" in "kube-system" namespace has status "Ready":"False"
	I0717 19:43:49.218159 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:49.218801 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | unable to find current IP address of domain kubernetes-upgrade-852374 in network mk-kubernetes-upgrade-852374
	I0717 19:43:49.218834 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | I0717 19:43:49.218725 1092866 retry.go:31] will retry after 3.961484205s: waiting for machine to come up
	I0717 19:43:53.182878 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:53.183452 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | unable to find current IP address of domain kubernetes-upgrade-852374 in network mk-kubernetes-upgrade-852374
	I0717 19:43:53.183484 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | I0717 19:43:53.183409 1092866 retry.go:31] will retry after 3.82827494s: waiting for machine to come up
	I0717 19:43:52.299490 1092475 pod_ready.go:102] pod "etcd-pause-882959" in "kube-system" namespace has status "Ready":"False"
	I0717 19:43:54.301113 1092475 pod_ready.go:102] pod "etcd-pause-882959" in "kube-system" namespace has status "Ready":"False"
	I0717 19:43:56.516618 1092475 pod_ready.go:102] pod "etcd-pause-882959" in "kube-system" namespace has status "Ready":"False"
	I0717 19:43:57.305184 1092475 pod_ready.go:92] pod "etcd-pause-882959" in "kube-system" namespace has status "Ready":"True"
	I0717 19:43:57.305220 1092475 pod_ready.go:81] duration metric: took 9.52235189s waiting for pod "etcd-pause-882959" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:57.305237 1092475 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-882959" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:57.312124 1092475 pod_ready.go:92] pod "kube-apiserver-pause-882959" in "kube-system" namespace has status "Ready":"True"
	I0717 19:43:57.312159 1092475 pod_ready.go:81] duration metric: took 6.91274ms waiting for pod "kube-apiserver-pause-882959" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:57.312175 1092475 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-882959" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:57.318738 1092475 pod_ready.go:92] pod "kube-controller-manager-pause-882959" in "kube-system" namespace has status "Ready":"True"
	I0717 19:43:57.318762 1092475 pod_ready.go:81] duration metric: took 6.578623ms waiting for pod "kube-controller-manager-pause-882959" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:57.318772 1092475 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zfl75" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:57.325497 1092475 pod_ready.go:92] pod "kube-proxy-zfl75" in "kube-system" namespace has status "Ready":"True"
	I0717 19:43:57.325519 1092475 pod_ready.go:81] duration metric: took 6.741343ms waiting for pod "kube-proxy-zfl75" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:57.325530 1092475 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-882959" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:57.333049 1092475 pod_ready.go:92] pod "kube-scheduler-pause-882959" in "kube-system" namespace has status "Ready":"True"
	I0717 19:43:57.333076 1092475 pod_ready.go:81] duration metric: took 7.539683ms waiting for pod "kube-scheduler-pause-882959" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:57.333086 1092475 pod_ready.go:38] duration metric: took 12.111584234s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
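A rough manual equivalent of the readiness wait above, assuming the pause-882959 kubeconfig context is active (the test itself uses minikube's internal pod_ready helper rather than kubectl):

    # one wait per label the harness lists; repeat for the remaining components
    kubectl --context pause-882959 -n kube-system wait pod \
      --selector k8s-app=kube-dns --for=condition=Ready --timeout=4m
    kubectl --context pause-882959 -n kube-system wait pod \
      --selector component=etcd --for=condition=Ready --timeout=4m
    # ...and likewise for component=kube-apiserver, component=kube-controller-manager,
    # k8s-app=kube-proxy, component=kube-scheduler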
	I0717 19:43:57.333114 1092475 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 19:43:57.355608 1092475 ops.go:34] apiserver oom_adj: -16
	I0717 19:43:57.355640 1092475 kubeadm.go:640] restartCluster took 40.835873846s
	I0717 19:43:57.355654 1092475 kubeadm.go:406] StartCluster complete in 40.930456599s
	I0717 19:43:57.355710 1092475 settings.go:142] acquiring lock: {Name:mk3323296c186763db6ebb5a1c5ae94a6b1a6242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:43:57.355817 1092475 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 19:43:57.356572 1092475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/kubeconfig: {Name:mk7f4fe48169c87be980c7edd9dbe55d4ea8b9ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:43:57.356890 1092475 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 19:43:57.357056 1092475 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0717 19:43:57.361136 1092475 out.go:177] * Enabled addons: 
	I0717 19:43:57.357292 1092475 config.go:182] Loaded profile config "pause-882959": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:43:57.357608 1092475 kapi.go:59] client config for pause-882959: &rest.Config{Host:"https://192.168.61.161:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/pause-882959/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/pause-882959/client.key", CAFile:"/home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 19:43:57.363218 1092475 addons.go:502] enable addons completed in 6.157731ms: enabled=[]
	I0717 19:43:57.365667 1092475 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-882959" context rescaled to 1 replicas
	I0717 19:43:57.365719 1092475 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.161 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:43:57.368268 1092475 out.go:177] * Verifying Kubernetes components...
	I0717 19:43:57.014881 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:57.015556 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Found IP for machine: 192.168.72.32
	I0717 19:43:57.015588 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Reserving static IP address...
	I0717 19:43:57.015603 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has current primary IP address 192.168.72.32 and MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:57.016189 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | found host DHCP lease matching {name: "kubernetes-upgrade-852374", mac: "52:54:00:c4:25:2e", ip: "192.168.72.32"} in network mk-kubernetes-upgrade-852374: {Iface:virbr4 ExpiryTime:2023-07-17 20:42:37 +0000 UTC Type:0 Mac:52:54:00:c4:25:2e Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:kubernetes-upgrade-852374 Clientid:01:52:54:00:c4:25:2e}
	I0717 19:43:57.016246 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Reserved static IP address: 192.168.72.32
	I0717 19:43:57.016265 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | skip adding static IP to network mk-kubernetes-upgrade-852374 - found existing host DHCP lease matching {name: "kubernetes-upgrade-852374", mac: "52:54:00:c4:25:2e", ip: "192.168.72.32"}
	I0717 19:43:57.016286 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | Getting to WaitForSSH function...
	I0717 19:43:57.016305 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Waiting for SSH to be available...
	I0717 19:43:57.018474 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:57.018892 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:25:2e", ip: ""} in network mk-kubernetes-upgrade-852374: {Iface:virbr4 ExpiryTime:2023-07-17 20:42:37 +0000 UTC Type:0 Mac:52:54:00:c4:25:2e Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:kubernetes-upgrade-852374 Clientid:01:52:54:00:c4:25:2e}
	I0717 19:43:57.018926 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined IP address 192.168.72.32 and MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:57.019213 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | Using SSH client type: external
	I0717 19:43:57.019250 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | Using SSH private key: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/kubernetes-upgrade-852374/id_rsa (-rw-------)
	I0717 19:43:57.019291 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/kubernetes-upgrade-852374/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:43:57.019311 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | About to run SSH command:
	I0717 19:43:57.019330 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | exit 0
	I0717 19:43:57.110302 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | SSH cmd err, output: <nil>: 
	I0717 19:43:57.110798 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetConfigRaw
	I0717 19:43:57.111651 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetIP
	I0717 19:43:57.114415 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:57.114778 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:25:2e", ip: ""} in network mk-kubernetes-upgrade-852374: {Iface:virbr4 ExpiryTime:2023-07-17 20:42:37 +0000 UTC Type:0 Mac:52:54:00:c4:25:2e Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:kubernetes-upgrade-852374 Clientid:01:52:54:00:c4:25:2e}
	I0717 19:43:57.114819 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined IP address 192.168.72.32 and MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:57.115001 1092821 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/kubernetes-upgrade-852374/config.json ...
	I0717 19:43:57.115233 1092821 machine.go:88] provisioning docker machine ...
	I0717 19:43:57.115262 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .DriverName
	I0717 19:43:57.115533 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetMachineName
	I0717 19:43:57.115732 1092821 buildroot.go:166] provisioning hostname "kubernetes-upgrade-852374"
	I0717 19:43:57.115765 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetMachineName
	I0717 19:43:57.115978 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHHostname
	I0717 19:43:57.118091 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:57.118407 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:25:2e", ip: ""} in network mk-kubernetes-upgrade-852374: {Iface:virbr4 ExpiryTime:2023-07-17 20:42:37 +0000 UTC Type:0 Mac:52:54:00:c4:25:2e Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:kubernetes-upgrade-852374 Clientid:01:52:54:00:c4:25:2e}
	I0717 19:43:57.118442 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined IP address 192.168.72.32 and MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:57.118590 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHPort
	I0717 19:43:57.118822 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHKeyPath
	I0717 19:43:57.119041 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHKeyPath
	I0717 19:43:57.119216 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHUsername
	I0717 19:43:57.119383 1092821 main.go:141] libmachine: Using SSH client type: native
	I0717 19:43:57.119871 1092821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0717 19:43:57.119886 1092821 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-852374 && echo "kubernetes-upgrade-852374" | sudo tee /etc/hostname
	I0717 19:43:57.244123 1092821 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-852374
	
	I0717 19:43:57.244158 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHHostname
	I0717 19:43:57.247418 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:57.247822 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:25:2e", ip: ""} in network mk-kubernetes-upgrade-852374: {Iface:virbr4 ExpiryTime:2023-07-17 20:42:37 +0000 UTC Type:0 Mac:52:54:00:c4:25:2e Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:kubernetes-upgrade-852374 Clientid:01:52:54:00:c4:25:2e}
	I0717 19:43:57.247864 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined IP address 192.168.72.32 and MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:57.248047 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHPort
	I0717 19:43:57.248272 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHKeyPath
	I0717 19:43:57.248477 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHKeyPath
	I0717 19:43:57.248603 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHUsername
	I0717 19:43:57.248824 1092821 main.go:141] libmachine: Using SSH client type: native
	I0717 19:43:57.249294 1092821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0717 19:43:57.249318 1092821 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-852374' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-852374/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-852374' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:43:57.374003 1092821 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:43:57.374038 1092821 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16890-1061725/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-1061725/.minikube}
	I0717 19:43:57.374065 1092821 buildroot.go:174] setting up certificates
	I0717 19:43:57.374077 1092821 provision.go:83] configureAuth start
	I0717 19:43:57.374089 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetMachineName
	I0717 19:43:57.374447 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetIP
	I0717 19:43:57.377595 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:57.378152 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:25:2e", ip: ""} in network mk-kubernetes-upgrade-852374: {Iface:virbr4 ExpiryTime:2023-07-17 20:42:37 +0000 UTC Type:0 Mac:52:54:00:c4:25:2e Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:kubernetes-upgrade-852374 Clientid:01:52:54:00:c4:25:2e}
	I0717 19:43:57.378188 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined IP address 192.168.72.32 and MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:57.378673 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHHostname
	I0717 19:43:57.381688 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:57.382171 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:25:2e", ip: ""} in network mk-kubernetes-upgrade-852374: {Iface:virbr4 ExpiryTime:2023-07-17 20:42:37 +0000 UTC Type:0 Mac:52:54:00:c4:25:2e Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:kubernetes-upgrade-852374 Clientid:01:52:54:00:c4:25:2e}
	I0717 19:43:57.382207 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined IP address 192.168.72.32 and MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:57.382365 1092821 provision.go:138] copyHostCerts
	I0717 19:43:57.382438 1092821 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem, removing ...
	I0717 19:43:57.382452 1092821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem
	I0717 19:43:57.382554 1092821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem (1082 bytes)
	I0717 19:43:57.382673 1092821 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem, removing ...
	I0717 19:43:57.382687 1092821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem
	I0717 19:43:57.382722 1092821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem (1123 bytes)
	I0717 19:43:57.382797 1092821 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem, removing ...
	I0717 19:43:57.382809 1092821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem
	I0717 19:43:57.382842 1092821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem (1675 bytes)
	I0717 19:43:57.382989 1092821 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-852374 san=[192.168.72.32 192.168.72.32 localhost 127.0.0.1 minikube kubernetes-upgrade-852374]
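provision.go generates the machine server certificate internally; purely as a sketch (not minikube's code path), an openssl equivalent that signs a server cert carrying the same SAN list against the profile CA named in the log would be:

    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.kubernetes-upgrade-852374"
    openssl x509 -req -in server.csr -days 365 \
      -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server.pem \
      -extfile <(printf "subjectAltName=IP:192.168.72.32,DNS:localhost,IP:127.0.0.1,DNS:minikube,DNS:kubernetes-upgrade-852374")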
	I0717 19:43:57.710036 1092821 provision.go:172] copyRemoteCerts
	I0717 19:43:57.710119 1092821 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:43:57.710158 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHHostname
	I0717 19:43:57.712981 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:57.713367 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:25:2e", ip: ""} in network mk-kubernetes-upgrade-852374: {Iface:virbr4 ExpiryTime:2023-07-17 20:42:37 +0000 UTC Type:0 Mac:52:54:00:c4:25:2e Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:kubernetes-upgrade-852374 Clientid:01:52:54:00:c4:25:2e}
	I0717 19:43:57.713402 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined IP address 192.168.72.32 and MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:57.713666 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHPort
	I0717 19:43:57.713942 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHKeyPath
	I0717 19:43:57.714256 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHUsername
	I0717 19:43:57.714501 1092821 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/kubernetes-upgrade-852374/id_rsa Username:docker}
	I0717 19:43:57.800083 1092821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0717 19:43:57.827867 1092821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 19:43:57.854407 1092821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 19:43:57.879563 1092821 provision.go:86] duration metric: configureAuth took 505.471298ms
	I0717 19:43:57.879595 1092821 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:43:57.879775 1092821 config.go:182] Loaded profile config "kubernetes-upgrade-852374": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:43:57.879871 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHHostname
	I0717 19:43:57.882921 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:57.883352 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:25:2e", ip: ""} in network mk-kubernetes-upgrade-852374: {Iface:virbr4 ExpiryTime:2023-07-17 20:42:37 +0000 UTC Type:0 Mac:52:54:00:c4:25:2e Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:kubernetes-upgrade-852374 Clientid:01:52:54:00:c4:25:2e}
	I0717 19:43:57.883390 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined IP address 192.168.72.32 and MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:57.883544 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHPort
	I0717 19:43:57.883713 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHKeyPath
	I0717 19:43:57.883918 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHKeyPath
	I0717 19:43:57.884071 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHUsername
	I0717 19:43:57.884257 1092821 main.go:141] libmachine: Using SSH client type: native
	I0717 19:43:57.884861 1092821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0717 19:43:57.884888 1092821 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:43:58.212045 1092821 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
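The %!s(MISSING) tokens in the command above come from minikube's own log formatting, not from this report; reconstructed from the file content echoed back over SSH, the command that actually ran is approximately:

    sudo mkdir -p /etc/sysconfig && printf "%s" "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio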
	
	I0717 19:43:58.212082 1092821 machine.go:91] provisioned docker machine in 1.096832354s
	I0717 19:43:58.212097 1092821 start.go:300] post-start starting for "kubernetes-upgrade-852374" (driver="kvm2")
	I0717 19:43:58.212110 1092821 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:43:58.212136 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .DriverName
	I0717 19:43:58.212578 1092821 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:43:58.212618 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHHostname
	I0717 19:43:58.215700 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:58.216169 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:25:2e", ip: ""} in network mk-kubernetes-upgrade-852374: {Iface:virbr4 ExpiryTime:2023-07-17 20:42:37 +0000 UTC Type:0 Mac:52:54:00:c4:25:2e Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:kubernetes-upgrade-852374 Clientid:01:52:54:00:c4:25:2e}
	I0717 19:43:58.216207 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined IP address 192.168.72.32 and MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:58.216394 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHPort
	I0717 19:43:58.216602 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHKeyPath
	I0717 19:43:58.216773 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHUsername
	I0717 19:43:58.216957 1092821 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/kubernetes-upgrade-852374/id_rsa Username:docker}
	I0717 19:43:58.305109 1092821 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:43:58.310287 1092821 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 19:43:58.310324 1092821 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/addons for local assets ...
	I0717 19:43:58.310488 1092821 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/files for local assets ...
	I0717 19:43:58.310652 1092821 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> 10689542.pem in /etc/ssl/certs
	I0717 19:43:58.310780 1092821 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:43:58.320383 1092821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:43:58.347461 1092821 start.go:303] post-start completed in 135.340802ms
	I0717 19:43:58.347499 1092821 fix.go:56] fixHost completed within 24.324501894s
	I0717 19:43:58.347529 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHHostname
	I0717 19:43:58.351205 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:58.351703 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:25:2e", ip: ""} in network mk-kubernetes-upgrade-852374: {Iface:virbr4 ExpiryTime:2023-07-17 20:42:37 +0000 UTC Type:0 Mac:52:54:00:c4:25:2e Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:kubernetes-upgrade-852374 Clientid:01:52:54:00:c4:25:2e}
	I0717 19:43:58.351745 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined IP address 192.168.72.32 and MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:58.351951 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHPort
	I0717 19:43:58.352169 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHKeyPath
	I0717 19:43:58.352422 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHKeyPath
	I0717 19:43:58.352634 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHUsername
	I0717 19:43:58.352863 1092821 main.go:141] libmachine: Using SSH client type: native
	I0717 19:43:58.353328 1092821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I0717 19:43:58.353346 1092821 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:43:58.466610 1092821 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689623038.406717258
	
	I0717 19:43:58.466633 1092821 fix.go:206] guest clock: 1689623038.406717258
	I0717 19:43:58.466643 1092821 fix.go:219] Guest: 2023-07-17 19:43:58.406717258 +0000 UTC Remote: 2023-07-17 19:43:58.347503617 +0000 UTC m=+24.512602427 (delta=59.213641ms)
	I0717 19:43:58.466670 1092821 fix.go:190] guest clock delta is within tolerance: 59.213641ms
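The guest clock probe above carries the same formatting artifact; judging by the epoch output (1689623038.406717258), the intended command is the standard seconds.nanoseconds query, whose result the harness compares against the host clock to produce the 59.213641ms delta:

    date +%s.%N   # seconds.nanoseconds since the epoch on the guest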
	I0717 19:43:58.466677 1092821 start.go:83] releasing machines lock for "kubernetes-upgrade-852374", held for 24.44372546s
	I0717 19:43:58.466704 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .DriverName
	I0717 19:43:58.467028 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetIP
	I0717 19:43:58.470079 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:58.470439 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:25:2e", ip: ""} in network mk-kubernetes-upgrade-852374: {Iface:virbr4 ExpiryTime:2023-07-17 20:42:37 +0000 UTC Type:0 Mac:52:54:00:c4:25:2e Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:kubernetes-upgrade-852374 Clientid:01:52:54:00:c4:25:2e}
	I0717 19:43:58.470478 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined IP address 192.168.72.32 and MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:58.470663 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .DriverName
	I0717 19:43:58.471275 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .DriverName
	I0717 19:43:58.471472 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .DriverName
	I0717 19:43:58.471549 1092821 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:43:58.471600 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHHostname
	I0717 19:43:58.471712 1092821 ssh_runner.go:195] Run: cat /version.json
	I0717 19:43:58.471737 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHHostname
	I0717 19:43:58.474326 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:58.474475 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:58.474727 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:25:2e", ip: ""} in network mk-kubernetes-upgrade-852374: {Iface:virbr4 ExpiryTime:2023-07-17 20:42:37 +0000 UTC Type:0 Mac:52:54:00:c4:25:2e Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:kubernetes-upgrade-852374 Clientid:01:52:54:00:c4:25:2e}
	I0717 19:43:58.474775 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined IP address 192.168.72.32 and MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:58.474831 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:25:2e", ip: ""} in network mk-kubernetes-upgrade-852374: {Iface:virbr4 ExpiryTime:2023-07-17 20:42:37 +0000 UTC Type:0 Mac:52:54:00:c4:25:2e Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:kubernetes-upgrade-852374 Clientid:01:52:54:00:c4:25:2e}
	I0717 19:43:58.474860 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined IP address 192.168.72.32 and MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:43:58.474875 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHPort
	I0717 19:43:58.475090 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHKeyPath
	I0717 19:43:58.475095 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHPort
	I0717 19:43:58.475290 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHKeyPath
	I0717 19:43:58.475307 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHUsername
	I0717 19:43:58.475438 1092821 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/kubernetes-upgrade-852374/id_rsa Username:docker}
	I0717 19:43:58.475506 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetSSHUsername
	I0717 19:43:58.475610 1092821 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/kubernetes-upgrade-852374/id_rsa Username:docker}
	W0717 19:43:58.555637 1092821 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 19:43:58.555730 1092821 ssh_runner.go:195] Run: systemctl --version
	I0717 19:43:58.585035 1092821 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:43:58.739160 1092821 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:43:58.747084 1092821 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:43:58.747192 1092821 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:43:58.763495 1092821 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:43:58.763528 1092821 start.go:469] detecting cgroup driver to use...
	I0717 19:43:58.763598 1092821 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:43:58.778393 1092821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:43:58.792251 1092821 docker.go:196] disabling cri-docker service (if available) ...
	I0717 19:43:58.792318 1092821 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:43:58.805464 1092821 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:43:58.819247 1092821 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:43:58.941594 1092821 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:43:59.086847 1092821 docker.go:212] disabling docker service ...
	I0717 19:43:59.086939 1092821 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:43:59.103870 1092821 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:43:59.118485 1092821 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:43:59.253200 1092821 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:43:59.391215 1092821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:43:59.404196 1092821 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:43:59.425769 1092821 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:43:59.425824 1092821 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:43:59.436690 1092821 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:43:59.436775 1092821 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:43:59.448934 1092821 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:43:59.460118 1092821 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:43:59.470731 1092821 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:43:59.481661 1092821 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:43:59.491714 1092821 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:43:59.491788 1092821 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:43:59.507269 1092821 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:43:59.519754 1092821 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:43:59.648445 1092821 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:43:59.867784 1092821 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:43:59.867866 1092821 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:43:59.875505 1092821 start.go:537] Will wait 60s for crictl version
	I0717 19:43:59.875580 1092821 ssh_runner.go:195] Run: which crictl
	I0717 19:43:59.880545 1092821 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:43:59.916914 1092821 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 19:43:59.917011 1092821 ssh_runner.go:195] Run: crio --version
	I0717 19:43:59.975577 1092821 ssh_runner.go:195] Run: crio --version
	I0717 19:44:00.028632 1092821 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
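The log lines above show the container-runtime preparation phase: crictl is pointed at the crio socket, the pause image and cgroup driver are rewritten in /etc/crio/crio.conf.d/02-crio.conf, br_netfilter and IPv4 forwarding are enabled, and crio is restarted. The Go sketch below is only an illustration of that command sequence as it appears in the log, not minikube's actual code: in minikube these commands run through ssh_runner over SSH inside the guest VM, and the local exec.Command calls here merely stand in for that.

package main

// Illustrative sketch (assumed, not minikube source) of the CRI-O preparation
// steps visible in the log above. Each step may legitimately fail on a host
// without CRI-O installed; the point is the ordering, not the environment.

import (
	"fmt"
	"os/exec"
)

func main() {
	steps := []string{
		`printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml`,
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo modprobe br_netfilter`,
		`sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'`,
		`sudo systemctl daemon-reload && sudo systemctl restart crio`,
	}
	for _, s := range steps {
		// Run each step through a shell, mirroring the ssh_runner invocations above.
		if out, err := exec.Command("sh", "-c", s).CombinedOutput(); err != nil {
			fmt.Printf("step failed: %s: %v\n%s", s, err, out)
			return
		}
	}
	fmt.Println("CRI-O configured and restarted")
}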
	I0717 19:43:57.370076 1092475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:43:57.467370 1092475 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0717 19:43:57.467369 1092475 node_ready.go:35] waiting up to 6m0s for node "pause-882959" to be "Ready" ...
	I0717 19:43:57.496510 1092475 node_ready.go:49] node "pause-882959" has status "Ready":"True"
	I0717 19:43:57.496545 1092475 node_ready.go:38] duration metric: took 29.138971ms waiting for node "pause-882959" to be "Ready" ...
	I0717 19:43:57.496557 1092475 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:43:57.698827 1092475 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-wqtgn" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:58.095689 1092475 pod_ready.go:92] pod "coredns-5d78c9869d-wqtgn" in "kube-system" namespace has status "Ready":"True"
	I0717 19:43:58.095719 1092475 pod_ready.go:81] duration metric: took 396.855147ms waiting for pod "coredns-5d78c9869d-wqtgn" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:58.095730 1092475 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-882959" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:58.496481 1092475 pod_ready.go:92] pod "etcd-pause-882959" in "kube-system" namespace has status "Ready":"True"
	I0717 19:43:58.496511 1092475 pod_ready.go:81] duration metric: took 400.775096ms waiting for pod "etcd-pause-882959" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:58.496526 1092475 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-882959" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:58.897434 1092475 pod_ready.go:92] pod "kube-apiserver-pause-882959" in "kube-system" namespace has status "Ready":"True"
	I0717 19:43:58.897459 1092475 pod_ready.go:81] duration metric: took 400.926711ms waiting for pod "kube-apiserver-pause-882959" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:58.897472 1092475 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-882959" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:59.296907 1092475 pod_ready.go:92] pod "kube-controller-manager-pause-882959" in "kube-system" namespace has status "Ready":"True"
	I0717 19:43:59.296946 1092475 pod_ready.go:81] duration metric: took 399.465164ms waiting for pod "kube-controller-manager-pause-882959" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:59.296962 1092475 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zfl75" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:59.696699 1092475 pod_ready.go:92] pod "kube-proxy-zfl75" in "kube-system" namespace has status "Ready":"True"
	I0717 19:43:59.696727 1092475 pod_ready.go:81] duration metric: took 399.756797ms waiting for pod "kube-proxy-zfl75" in "kube-system" namespace to be "Ready" ...
	I0717 19:43:59.696736 1092475 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-882959" in "kube-system" namespace to be "Ready" ...
	I0717 19:44:00.096858 1092475 pod_ready.go:92] pod "kube-scheduler-pause-882959" in "kube-system" namespace has status "Ready":"True"
	I0717 19:44:00.096886 1092475 pod_ready.go:81] duration metric: took 400.143388ms waiting for pod "kube-scheduler-pause-882959" in "kube-system" namespace to be "Ready" ...
	I0717 19:44:00.096895 1092475 pod_ready.go:38] duration metric: took 2.600327122s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:44:00.096912 1092475 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:44:00.096954 1092475 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:44:00.117678 1092475 api_server.go:72] duration metric: took 2.751917265s to wait for apiserver process to appear ...
	I0717 19:44:00.117713 1092475 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:44:00.117736 1092475 api_server.go:253] Checking apiserver healthz at https://192.168.61.161:8443/healthz ...
	I0717 19:44:00.127038 1092475 api_server.go:279] https://192.168.61.161:8443/healthz returned 200:
	ok
	I0717 19:44:00.128710 1092475 api_server.go:141] control plane version: v1.27.3
	I0717 19:44:00.128745 1092475 api_server.go:131] duration metric: took 11.024265ms to wait for apiserver health ...
	I0717 19:44:00.128757 1092475 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:44:00.301327 1092475 system_pods.go:59] 6 kube-system pods found
	I0717 19:44:00.301356 1092475 system_pods.go:61] "coredns-5d78c9869d-wqtgn" [751e7e2a-ac16-4ed3-a2a2-525707d4d84d] Running
	I0717 19:44:00.301361 1092475 system_pods.go:61] "etcd-pause-882959" [78ec428a-348c-4a75-8ec7-da945774031b] Running
	I0717 19:44:00.301365 1092475 system_pods.go:61] "kube-apiserver-pause-882959" [9d878a73-d7a9-41d1-82e3-f0f10b1294b6] Running
	I0717 19:44:00.301370 1092475 system_pods.go:61] "kube-controller-manager-pause-882959" [7e8f2b84-b012-4c12-9f0f-b09e5a5b2a41] Running
	I0717 19:44:00.301374 1092475 system_pods.go:61] "kube-proxy-zfl75" [e585a501-1534-4a6d-8c94-fbcb8e24cad2] Running
	I0717 19:44:00.301378 1092475 system_pods.go:61] "kube-scheduler-pause-882959" [ed96358e-3d5e-4038-bd60-2ce64193f430] Running
	I0717 19:44:00.301383 1092475 system_pods.go:74] duration metric: took 172.619982ms to wait for pod list to return data ...
	I0717 19:44:00.301392 1092475 default_sa.go:34] waiting for default service account to be created ...
	I0717 19:44:00.498216 1092475 default_sa.go:45] found service account: "default"
	I0717 19:44:00.498252 1092475 default_sa.go:55] duration metric: took 196.854112ms for default service account to be created ...
	I0717 19:44:00.498266 1092475 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 19:44:00.701540 1092475 system_pods.go:86] 6 kube-system pods found
	I0717 19:44:00.701602 1092475 system_pods.go:89] "coredns-5d78c9869d-wqtgn" [751e7e2a-ac16-4ed3-a2a2-525707d4d84d] Running
	I0717 19:44:00.701610 1092475 system_pods.go:89] "etcd-pause-882959" [78ec428a-348c-4a75-8ec7-da945774031b] Running
	I0717 19:44:00.701615 1092475 system_pods.go:89] "kube-apiserver-pause-882959" [9d878a73-d7a9-41d1-82e3-f0f10b1294b6] Running
	I0717 19:44:00.701621 1092475 system_pods.go:89] "kube-controller-manager-pause-882959" [7e8f2b84-b012-4c12-9f0f-b09e5a5b2a41] Running
	I0717 19:44:00.701626 1092475 system_pods.go:89] "kube-proxy-zfl75" [e585a501-1534-4a6d-8c94-fbcb8e24cad2] Running
	I0717 19:44:00.701631 1092475 system_pods.go:89] "kube-scheduler-pause-882959" [ed96358e-3d5e-4038-bd60-2ce64193f430] Running
	I0717 19:44:00.701640 1092475 system_pods.go:126] duration metric: took 203.366821ms to wait for k8s-apps to be running ...
	I0717 19:44:00.701647 1092475 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 19:44:00.701700 1092475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:44:00.727411 1092475 system_svc.go:56] duration metric: took 25.747782ms WaitForService to wait for kubelet.
	I0717 19:44:00.727449 1092475 kubeadm.go:581] duration metric: took 3.361697217s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 19:44:00.727474 1092475 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:44:00.899324 1092475 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 19:44:00.899364 1092475 node_conditions.go:123] node cpu capacity is 2
	I0717 19:44:00.899382 1092475 node_conditions.go:105] duration metric: took 171.900278ms to run NodePressure ...
	I0717 19:44:00.899398 1092475 start.go:228] waiting for startup goroutines ...
	I0717 19:44:00.899407 1092475 start.go:233] waiting for cluster config update ...
	I0717 19:44:00.899423 1092475 start.go:242] writing updated cluster config ...
	I0717 19:44:00.899860 1092475 ssh_runner.go:195] Run: rm -f paused
	I0717 19:44:00.984206 1092475 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 19:44:00.987356 1092475 out.go:177] * Done! kubectl is now configured to use "pause-882959" cluster and "default" namespace by default
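The pause-882959 restart above ends with minikube polling the apiserver healthz endpoint ("Checking apiserver healthz at https://192.168.61.161:8443/healthz ..."). Below is a minimal sketch of that polling shape only; it is not minikube's implementation, which authenticates with the cluster's client certificates and also waits on node and pod readiness through the Kubernetes API. The address and the insecure TLS setting are illustrative assumptions for a throwaway check.

package main

// Minimal sketch of polling an apiserver /healthz endpoint until it returns 200.

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// InsecureSkipVerify only because this is an illustration, not production code.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver answered 200 ("ok"), as in the log above
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	// Address taken from this test run; substitute your own cluster endpoint.
	if err := waitForHealthz("https://192.168.61.161:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthy")
}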
	I0717 19:44:00.030543 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) Calling .GetIP
	I0717 19:44:00.033583 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:44:00.034038 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:25:2e", ip: ""} in network mk-kubernetes-upgrade-852374: {Iface:virbr4 ExpiryTime:2023-07-17 20:42:37 +0000 UTC Type:0 Mac:52:54:00:c4:25:2e Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:kubernetes-upgrade-852374 Clientid:01:52:54:00:c4:25:2e}
	I0717 19:44:00.034081 1092821 main.go:141] libmachine: (kubernetes-upgrade-852374) DBG | domain kubernetes-upgrade-852374 has defined IP address 192.168.72.32 and MAC address 52:54:00:c4:25:2e in network mk-kubernetes-upgrade-852374
	I0717 19:44:00.034320 1092821 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 19:44:00.039115 1092821 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:44:00.052697 1092821 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 19:44:00.052798 1092821 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:44:00.095966 1092821 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0717 19:44:00.096064 1092821 ssh_runner.go:195] Run: which lz4
	I0717 19:44:00.101015 1092821 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 19:44:00.106175 1092821 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:44:00.106223 1092821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0717 19:44:02.006679 1092821 crio.go:444] Took 1.905704 seconds to copy over tarball
	I0717 19:44:02.006744 1092821 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
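The last lines of the kubernetes-upgrade-852374 log show the preload handling: the guest has no /preloaded.tar.lz4, so the cached cri-o preload tarball is copied in and unpacked under /var with lz4 so the Kubernetes images are present before the runtime starts. The sketch below mirrors those steps under stated assumptions: it is not minikube code, and a plain local copy stands in for the scp-over-SSH transfer the log shows.

package main

// Assumed illustration of the preload check / copy / extract sequence above.

import (
	"fmt"
	"os"
	"os/exec"
)

func ensurePreload(localTarball, guestPath string) error {
	if _, err := os.Stat(guestPath); err == nil {
		return nil // tarball already present on the guest, nothing to do
	}
	// minikube scp's the tarball over SSH; a local copy illustrates the step.
	if out, err := exec.Command("sudo", "cp", localTarball, guestPath).CombinedOutput(); err != nil {
		return fmt.Errorf("copy preload: %v: %s", err, out)
	}
	// Same extraction command as the log: tar with lz4 decompression into /var.
	if out, err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", guestPath).CombinedOutput(); err != nil {
		return fmt.Errorf("extract preload: %v: %s", err, out)
	}
	return nil
}

func main() {
	err := ensurePreload(
		"preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4",
		"/preloaded.tar.lz4",
	)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("preload ready")
}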
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-07-17 19:42:04 UTC, ends at Mon 2023-07-17 19:44:06 UTC. --
	Jul 17 19:44:04 pause-882959 crio[2539]: time="2023-07-17 19:44:04.805958224Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2871a59b-a6e1-4609-9bae-388b4a0c4c4d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:44:04 pause-882959 crio[2539]: time="2023-07-17 19:44:04.806551117Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4651fcefee21d8da426e4bfbeb78e89164404e4940824cdbc24ea66f1addf81c,PodSandboxId:c2d840626c67986a478229adbe4a47e97620a19641bfe49d8186f52c75c4bed4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689623023424584943,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zfl75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e585a501-1534-4a6d-8c94-fbcb8e24cad2,},Annotations:map[string]string{io.kubernetes.container.hash: 402bb1e2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ddeef059aeac9a52019532e6280b7ad7f6fa81d1425584c651d0ba1e49636e9,PodSandboxId:578a3416f5d892ee2f6abdd060f83a2ca1357577a7ab0d8f90bec5dcfb7e1e12,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689623023187023598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-wqtgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751e7e2a-ac16-4ed3-a2a2-525707d4d84d,},Annotations:map[string]string{io.kubernetes.container.hash: 7711e259,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552f85e3417ed736a1f2555de4433f52c389d54f83a05e9b551ad6e26f41f54b,PodSandboxId:8875318cf44000b9a01b84e602b04469f4cc33e9f4baef75fbab0dc0ec4b8ebe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689623015478720588,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee4f8d
565bd40bb276359f9a3316e30,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f6c76b47d06a2c8510573c69e753d035faa7afc3ecc92113b1d5517c1e7aa2,PodSandboxId:04d6be588f83a25b865a30956b84e40e4d1a51fe0495e92d20824a92634136b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689623015465560988,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 146c3f581a9f3949700e695b352faa81,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65e86b0e324b8ae4f3194fcab72f659490dd508368f347032b4c8ba84a4708d8,PodSandboxId:23bf275989aa0cebbe17e18872e5cfbe841b9ccc1d61eba41efe18b799210bd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689623015395941048,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 630e68b7fc44a2b8708830cc8b87ff6d,},Annotati
ons:map[string]string{io.kubernetes.container.hash: d99c380c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f5a361747df73dae8da3eab5401ba47a5d862de911f553d695d924d854b3325,PodSandboxId:d4ad4d6790f9a14cd0a47e44d12886aba3fabd14ec31fd27b12e132ca35b9f34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689623015405584109,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f90ddf48f9fe6fe40a497facd9340e,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4293fa42,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1d1ae4fbdbc1163966bde2d686c11c1507edff36bf2532346f67a91f5ffccb3,PodSandboxId:d4ad4d6790f9a14cd0a47e44d12886aba3fabd14ec31fd27b12e132ca35b9f34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_EXITED,CreatedAt:1689623006026593834,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f90ddf48f9fe6fe40a497facd9340e,},Annotations:map[string]string{io.kubernet
es.container.hash: 4293fa42,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea020bea67e6847143435bd85b774e1f7ae721b240ead0b53db406afbadd92d,PodSandboxId:c2d840626c67986a478229adbe4a47e97620a19641bfe49d8186f52c75c4bed4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_EXITED,CreatedAt:1689623000841860137,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zfl75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e585a501-1534-4a6d-8c94-fbcb8e24cad2,},Annotations:map[string]string{io.kubernetes.container.hash: 402bb1e2,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e713b911a483c22e00dcb0423f7a498c8a1e2d0cb49f052b7f7ea017d524c14,PodSandboxId:578a3416f5d892ee2f6abdd060f83a2ca1357577a7ab0d8f90bec5dcfb7e1e12,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1689622999414703957,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-wqtgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751e7e2a-ac16-4ed3-a2a2-525707d4d84d,},Annotations:map[string]string{io.kubernetes.container.hash: 7711e259,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c720ae6c03731bdea255e8770e2f31e560717e6d78de0ed00b78066c89a21652,PodSandboxId:8875318cf44000b9a01b84e602b04469f4cc33e9f4baef75fbab0dc0ec4b8ebe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_EXITED,CreatedAt:1689622998714728573,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-882
959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee4f8d565bd40bb276359f9a3316e30,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dd7d620012bded34deeef2ec3386cb9036ac0ed8255e276646abb38d6ec371c,PodSandboxId:23bf275989aa0cebbe17e18872e5cfbe841b9ccc1d61eba41efe18b799210bd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_EXITED,CreatedAt:1689622998290377722,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 630e68b7fc44a2b8708830cc8b87ff6d,},Annotations:map[string]string{io.kubernetes.container.hash: d99c380c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf4ef96d6cd5c498572f61037aad7de89db67f22ada0211a0eede41c89e02832,PodSandboxId:04d6be588f83a25b865a30956b84e40e4d1a51fe0495e92d20824a92634136b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_EXITED,CreatedAt:1689622997752269802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-882959,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 146c3f581a9f3949700e695b352faa81,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cd806479e841d0c7ce2d834a8662b906ce551fbe29c2d6a254748e6426d13ab,PodSandboxId:dae5cce88bd5773450bb49d4763c330ff7c3db4535a422cb4720989af0c6c26a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1689622978792888320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-9m5pz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: decefa53-94e0-4dae-a713-5aa724208ceb,},Ann
otations:map[string]string{io.kubernetes.container.hash: f76a51bb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2871a59b-a6e1-4609-9bae-388b4a0c4c4d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:44:04 pause-882959 crio[2539]: time="2023-07-17 19:44:04.867945485Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4f528f13-1668-4b57-b540-cbcdd9b3fde3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:44:04 pause-882959 crio[2539]: time="2023-07-17 19:44:04.868208162Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4f528f13-1668-4b57-b540-cbcdd9b3fde3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:44:04 pause-882959 crio[2539]: time="2023-07-17 19:44:04.868860815Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4651fcefee21d8da426e4bfbeb78e89164404e4940824cdbc24ea66f1addf81c,PodSandboxId:c2d840626c67986a478229adbe4a47e97620a19641bfe49d8186f52c75c4bed4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689623023424584943,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zfl75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e585a501-1534-4a6d-8c94-fbcb8e24cad2,},Annotations:map[string]string{io.kubernetes.container.hash: 402bb1e2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ddeef059aeac9a52019532e6280b7ad7f6fa81d1425584c651d0ba1e49636e9,PodSandboxId:578a3416f5d892ee2f6abdd060f83a2ca1357577a7ab0d8f90bec5dcfb7e1e12,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689623023187023598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-wqtgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751e7e2a-ac16-4ed3-a2a2-525707d4d84d,},Annotations:map[string]string{io.kubernetes.container.hash: 7711e259,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552f85e3417ed736a1f2555de4433f52c389d54f83a05e9b551ad6e26f41f54b,PodSandboxId:8875318cf44000b9a01b84e602b04469f4cc33e9f4baef75fbab0dc0ec4b8ebe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689623015478720588,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee4f8d
565bd40bb276359f9a3316e30,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f6c76b47d06a2c8510573c69e753d035faa7afc3ecc92113b1d5517c1e7aa2,PodSandboxId:04d6be588f83a25b865a30956b84e40e4d1a51fe0495e92d20824a92634136b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689623015465560988,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 146c3f581a9f3949700e695b352faa81,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65e86b0e324b8ae4f3194fcab72f659490dd508368f347032b4c8ba84a4708d8,PodSandboxId:23bf275989aa0cebbe17e18872e5cfbe841b9ccc1d61eba41efe18b799210bd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689623015395941048,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 630e68b7fc44a2b8708830cc8b87ff6d,},Annotati
ons:map[string]string{io.kubernetes.container.hash: d99c380c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f5a361747df73dae8da3eab5401ba47a5d862de911f553d695d924d854b3325,PodSandboxId:d4ad4d6790f9a14cd0a47e44d12886aba3fabd14ec31fd27b12e132ca35b9f34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689623015405584109,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f90ddf48f9fe6fe40a497facd9340e,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4293fa42,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1d1ae4fbdbc1163966bde2d686c11c1507edff36bf2532346f67a91f5ffccb3,PodSandboxId:d4ad4d6790f9a14cd0a47e44d12886aba3fabd14ec31fd27b12e132ca35b9f34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_EXITED,CreatedAt:1689623006026593834,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f90ddf48f9fe6fe40a497facd9340e,},Annotations:map[string]string{io.kubernet
es.container.hash: 4293fa42,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea020bea67e6847143435bd85b774e1f7ae721b240ead0b53db406afbadd92d,PodSandboxId:c2d840626c67986a478229adbe4a47e97620a19641bfe49d8186f52c75c4bed4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_EXITED,CreatedAt:1689623000841860137,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zfl75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e585a501-1534-4a6d-8c94-fbcb8e24cad2,},Annotations:map[string]string{io.kubernetes.container.hash: 402bb1e2,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e713b911a483c22e00dcb0423f7a498c8a1e2d0cb49f052b7f7ea017d524c14,PodSandboxId:578a3416f5d892ee2f6abdd060f83a2ca1357577a7ab0d8f90bec5dcfb7e1e12,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1689622999414703957,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-wqtgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751e7e2a-ac16-4ed3-a2a2-525707d4d84d,},Annotations:map[string]string{io.kubernetes.container.hash: 7711e259,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c720ae6c03731bdea255e8770e2f31e560717e6d78de0ed00b78066c89a21652,PodSandboxId:8875318cf44000b9a01b84e602b04469f4cc33e9f4baef75fbab0dc0ec4b8ebe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_EXITED,CreatedAt:1689622998714728573,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-882
959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee4f8d565bd40bb276359f9a3316e30,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dd7d620012bded34deeef2ec3386cb9036ac0ed8255e276646abb38d6ec371c,PodSandboxId:23bf275989aa0cebbe17e18872e5cfbe841b9ccc1d61eba41efe18b799210bd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_EXITED,CreatedAt:1689622998290377722,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 630e68b7fc44a2b8708830cc8b87ff6d,},Annotations:map[string]string{io.kubernetes.container.hash: d99c380c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf4ef96d6cd5c498572f61037aad7de89db67f22ada0211a0eede41c89e02832,PodSandboxId:04d6be588f83a25b865a30956b84e40e4d1a51fe0495e92d20824a92634136b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_EXITED,CreatedAt:1689622997752269802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-882959,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 146c3f581a9f3949700e695b352faa81,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cd806479e841d0c7ce2d834a8662b906ce551fbe29c2d6a254748e6426d13ab,PodSandboxId:dae5cce88bd5773450bb49d4763c330ff7c3db4535a422cb4720989af0c6c26a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1689622978792888320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-9m5pz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: decefa53-94e0-4dae-a713-5aa724208ceb,},Ann
otations:map[string]string{io.kubernetes.container.hash: f76a51bb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4f528f13-1668-4b57-b540-cbcdd9b3fde3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:44:04 pause-882959 crio[2539]: time="2023-07-17 19:44:04.940624087Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=35451ed9-bea8-43d9-b61c-6d76459e8172 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:44:04 pause-882959 crio[2539]: time="2023-07-17 19:44:04.942134129Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=35451ed9-bea8-43d9-b61c-6d76459e8172 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:44:04 pause-882959 crio[2539]: time="2023-07-17 19:44:04.943236293Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4651fcefee21d8da426e4bfbeb78e89164404e4940824cdbc24ea66f1addf81c,PodSandboxId:c2d840626c67986a478229adbe4a47e97620a19641bfe49d8186f52c75c4bed4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689623023424584943,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zfl75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e585a501-1534-4a6d-8c94-fbcb8e24cad2,},Annotations:map[string]string{io.kubernetes.container.hash: 402bb1e2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ddeef059aeac9a52019532e6280b7ad7f6fa81d1425584c651d0ba1e49636e9,PodSandboxId:578a3416f5d892ee2f6abdd060f83a2ca1357577a7ab0d8f90bec5dcfb7e1e12,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689623023187023598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-wqtgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751e7e2a-ac16-4ed3-a2a2-525707d4d84d,},Annotations:map[string]string{io.kubernetes.container.hash: 7711e259,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552f85e3417ed736a1f2555de4433f52c389d54f83a05e9b551ad6e26f41f54b,PodSandboxId:8875318cf44000b9a01b84e602b04469f4cc33e9f4baef75fbab0dc0ec4b8ebe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689623015478720588,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee4f8d
565bd40bb276359f9a3316e30,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f6c76b47d06a2c8510573c69e753d035faa7afc3ecc92113b1d5517c1e7aa2,PodSandboxId:04d6be588f83a25b865a30956b84e40e4d1a51fe0495e92d20824a92634136b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689623015465560988,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 146c3f581a9f3949700e695b352faa81,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65e86b0e324b8ae4f3194fcab72f659490dd508368f347032b4c8ba84a4708d8,PodSandboxId:23bf275989aa0cebbe17e18872e5cfbe841b9ccc1d61eba41efe18b799210bd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689623015395941048,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 630e68b7fc44a2b8708830cc8b87ff6d,},Annotati
ons:map[string]string{io.kubernetes.container.hash: d99c380c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f5a361747df73dae8da3eab5401ba47a5d862de911f553d695d924d854b3325,PodSandboxId:d4ad4d6790f9a14cd0a47e44d12886aba3fabd14ec31fd27b12e132ca35b9f34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689623015405584109,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f90ddf48f9fe6fe40a497facd9340e,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4293fa42,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1d1ae4fbdbc1163966bde2d686c11c1507edff36bf2532346f67a91f5ffccb3,PodSandboxId:d4ad4d6790f9a14cd0a47e44d12886aba3fabd14ec31fd27b12e132ca35b9f34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_EXITED,CreatedAt:1689623006026593834,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f90ddf48f9fe6fe40a497facd9340e,},Annotations:map[string]string{io.kubernet
es.container.hash: 4293fa42,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea020bea67e6847143435bd85b774e1f7ae721b240ead0b53db406afbadd92d,PodSandboxId:c2d840626c67986a478229adbe4a47e97620a19641bfe49d8186f52c75c4bed4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_EXITED,CreatedAt:1689623000841860137,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zfl75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e585a501-1534-4a6d-8c94-fbcb8e24cad2,},Annotations:map[string]string{io.kubernetes.container.hash: 402bb1e2,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e713b911a483c22e00dcb0423f7a498c8a1e2d0cb49f052b7f7ea017d524c14,PodSandboxId:578a3416f5d892ee2f6abdd060f83a2ca1357577a7ab0d8f90bec5dcfb7e1e12,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1689622999414703957,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-wqtgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751e7e2a-ac16-4ed3-a2a2-525707d4d84d,},Annotations:map[string]string{io.kubernetes.container.hash: 7711e259,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c720ae6c03731bdea255e8770e2f31e560717e6d78de0ed00b78066c89a21652,PodSandboxId:8875318cf44000b9a01b84e602b04469f4cc33e9f4baef75fbab0dc0ec4b8ebe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_EXITED,CreatedAt:1689622998714728573,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-882
959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee4f8d565bd40bb276359f9a3316e30,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dd7d620012bded34deeef2ec3386cb9036ac0ed8255e276646abb38d6ec371c,PodSandboxId:23bf275989aa0cebbe17e18872e5cfbe841b9ccc1d61eba41efe18b799210bd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_EXITED,CreatedAt:1689622998290377722,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 630e68b7fc44a2b8708830cc8b87ff6d,},Annotations:map[string]string{io.kubernetes.container.hash: d99c380c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf4ef96d6cd5c498572f61037aad7de89db67f22ada0211a0eede41c89e02832,PodSandboxId:04d6be588f83a25b865a30956b84e40e4d1a51fe0495e92d20824a92634136b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_EXITED,CreatedAt:1689622997752269802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-882959,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 146c3f581a9f3949700e695b352faa81,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cd806479e841d0c7ce2d834a8662b906ce551fbe29c2d6a254748e6426d13ab,PodSandboxId:dae5cce88bd5773450bb49d4763c330ff7c3db4535a422cb4720989af0c6c26a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1689622978792888320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-9m5pz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: decefa53-94e0-4dae-a713-5aa724208ceb,},Ann
otations:map[string]string{io.kubernetes.container.hash: f76a51bb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=35451ed9-bea8-43d9-b61c-6d76459e8172 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:44:05 pause-882959 crio[2539]: time="2023-07-17 19:44:05.021338105Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c435ff1c-2edb-4c90-9a78-04af6abac1bb name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:44:05 pause-882959 crio[2539]: time="2023-07-17 19:44:05.021434654Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c435ff1c-2edb-4c90-9a78-04af6abac1bb name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:44:05 pause-882959 crio[2539]: time="2023-07-17 19:44:05.021881275Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4651fcefee21d8da426e4bfbeb78e89164404e4940824cdbc24ea66f1addf81c,PodSandboxId:c2d840626c67986a478229adbe4a47e97620a19641bfe49d8186f52c75c4bed4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689623023424584943,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zfl75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e585a501-1534-4a6d-8c94-fbcb8e24cad2,},Annotations:map[string]string{io.kubernetes.container.hash: 402bb1e2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ddeef059aeac9a52019532e6280b7ad7f6fa81d1425584c651d0ba1e49636e9,PodSandboxId:578a3416f5d892ee2f6abdd060f83a2ca1357577a7ab0d8f90bec5dcfb7e1e12,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689623023187023598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-wqtgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751e7e2a-ac16-4ed3-a2a2-525707d4d84d,},Annotations:map[string]string{io.kubernetes.container.hash: 7711e259,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552f85e3417ed736a1f2555de4433f52c389d54f83a05e9b551ad6e26f41f54b,PodSandboxId:8875318cf44000b9a01b84e602b04469f4cc33e9f4baef75fbab0dc0ec4b8ebe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689623015478720588,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee4f8d
565bd40bb276359f9a3316e30,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f6c76b47d06a2c8510573c69e753d035faa7afc3ecc92113b1d5517c1e7aa2,PodSandboxId:04d6be588f83a25b865a30956b84e40e4d1a51fe0495e92d20824a92634136b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689623015465560988,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 146c3f581a9f3949700e695b352faa81,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65e86b0e324b8ae4f3194fcab72f659490dd508368f347032b4c8ba84a4708d8,PodSandboxId:23bf275989aa0cebbe17e18872e5cfbe841b9ccc1d61eba41efe18b799210bd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689623015395941048,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 630e68b7fc44a2b8708830cc8b87ff6d,},Annotati
ons:map[string]string{io.kubernetes.container.hash: d99c380c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f5a361747df73dae8da3eab5401ba47a5d862de911f553d695d924d854b3325,PodSandboxId:d4ad4d6790f9a14cd0a47e44d12886aba3fabd14ec31fd27b12e132ca35b9f34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689623015405584109,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f90ddf48f9fe6fe40a497facd9340e,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4293fa42,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1d1ae4fbdbc1163966bde2d686c11c1507edff36bf2532346f67a91f5ffccb3,PodSandboxId:d4ad4d6790f9a14cd0a47e44d12886aba3fabd14ec31fd27b12e132ca35b9f34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_EXITED,CreatedAt:1689623006026593834,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f90ddf48f9fe6fe40a497facd9340e,},Annotations:map[string]string{io.kubernet
es.container.hash: 4293fa42,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea020bea67e6847143435bd85b774e1f7ae721b240ead0b53db406afbadd92d,PodSandboxId:c2d840626c67986a478229adbe4a47e97620a19641bfe49d8186f52c75c4bed4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_EXITED,CreatedAt:1689623000841860137,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zfl75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e585a501-1534-4a6d-8c94-fbcb8e24cad2,},Annotations:map[string]string{io.kubernetes.container.hash: 402bb1e2,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e713b911a483c22e00dcb0423f7a498c8a1e2d0cb49f052b7f7ea017d524c14,PodSandboxId:578a3416f5d892ee2f6abdd060f83a2ca1357577a7ab0d8f90bec5dcfb7e1e12,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1689622999414703957,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-wqtgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751e7e2a-ac16-4ed3-a2a2-525707d4d84d,},Annotations:map[string]string{io.kubernetes.container.hash: 7711e259,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c720ae6c03731bdea255e8770e2f31e560717e6d78de0ed00b78066c89a21652,PodSandboxId:8875318cf44000b9a01b84e602b04469f4cc33e9f4baef75fbab0dc0ec4b8ebe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_EXITED,CreatedAt:1689622998714728573,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-882
959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee4f8d565bd40bb276359f9a3316e30,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dd7d620012bded34deeef2ec3386cb9036ac0ed8255e276646abb38d6ec371c,PodSandboxId:23bf275989aa0cebbe17e18872e5cfbe841b9ccc1d61eba41efe18b799210bd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_EXITED,CreatedAt:1689622998290377722,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 630e68b7fc44a2b8708830cc8b87ff6d,},Annotations:map[string]string{io.kubernetes.container.hash: d99c380c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf4ef96d6cd5c498572f61037aad7de89db67f22ada0211a0eede41c89e02832,PodSandboxId:04d6be588f83a25b865a30956b84e40e4d1a51fe0495e92d20824a92634136b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_EXITED,CreatedAt:1689622997752269802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-882959,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 146c3f581a9f3949700e695b352faa81,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cd806479e841d0c7ce2d834a8662b906ce551fbe29c2d6a254748e6426d13ab,PodSandboxId:dae5cce88bd5773450bb49d4763c330ff7c3db4535a422cb4720989af0c6c26a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1689622978792888320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-9m5pz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: decefa53-94e0-4dae-a713-5aa724208ceb,},Ann
otations:map[string]string{io.kubernetes.container.hash: f76a51bb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c435ff1c-2edb-4c90-9a78-04af6abac1bb name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:44:05 pause-882959 crio[2539]: time="2023-07-17 19:44:05.083031695Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1bca5353-84f3-414b-b9d4-79048a804818 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:44:05 pause-882959 crio[2539]: time="2023-07-17 19:44:05.083333024Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1bca5353-84f3-414b-b9d4-79048a804818 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:44:05 pause-882959 crio[2539]: time="2023-07-17 19:44:05.083695932Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4651fcefee21d8da426e4bfbeb78e89164404e4940824cdbc24ea66f1addf81c,PodSandboxId:c2d840626c67986a478229adbe4a47e97620a19641bfe49d8186f52c75c4bed4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689623023424584943,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zfl75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e585a501-1534-4a6d-8c94-fbcb8e24cad2,},Annotations:map[string]string{io.kubernetes.container.hash: 402bb1e2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ddeef059aeac9a52019532e6280b7ad7f6fa81d1425584c651d0ba1e49636e9,PodSandboxId:578a3416f5d892ee2f6abdd060f83a2ca1357577a7ab0d8f90bec5dcfb7e1e12,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689623023187023598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-wqtgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751e7e2a-ac16-4ed3-a2a2-525707d4d84d,},Annotations:map[string]string{io.kubernetes.container.hash: 7711e259,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552f85e3417ed736a1f2555de4433f52c389d54f83a05e9b551ad6e26f41f54b,PodSandboxId:8875318cf44000b9a01b84e602b04469f4cc33e9f4baef75fbab0dc0ec4b8ebe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689623015478720588,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee4f8d
565bd40bb276359f9a3316e30,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f6c76b47d06a2c8510573c69e753d035faa7afc3ecc92113b1d5517c1e7aa2,PodSandboxId:04d6be588f83a25b865a30956b84e40e4d1a51fe0495e92d20824a92634136b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689623015465560988,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 146c3f581a9f3949700e695b352faa81,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65e86b0e324b8ae4f3194fcab72f659490dd508368f347032b4c8ba84a4708d8,PodSandboxId:23bf275989aa0cebbe17e18872e5cfbe841b9ccc1d61eba41efe18b799210bd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689623015395941048,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 630e68b7fc44a2b8708830cc8b87ff6d,},Annotati
ons:map[string]string{io.kubernetes.container.hash: d99c380c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f5a361747df73dae8da3eab5401ba47a5d862de911f553d695d924d854b3325,PodSandboxId:d4ad4d6790f9a14cd0a47e44d12886aba3fabd14ec31fd27b12e132ca35b9f34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689623015405584109,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f90ddf48f9fe6fe40a497facd9340e,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4293fa42,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1d1ae4fbdbc1163966bde2d686c11c1507edff36bf2532346f67a91f5ffccb3,PodSandboxId:d4ad4d6790f9a14cd0a47e44d12886aba3fabd14ec31fd27b12e132ca35b9f34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_EXITED,CreatedAt:1689623006026593834,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f90ddf48f9fe6fe40a497facd9340e,},Annotations:map[string]string{io.kubernet
es.container.hash: 4293fa42,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea020bea67e6847143435bd85b774e1f7ae721b240ead0b53db406afbadd92d,PodSandboxId:c2d840626c67986a478229adbe4a47e97620a19641bfe49d8186f52c75c4bed4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_EXITED,CreatedAt:1689623000841860137,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zfl75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e585a501-1534-4a6d-8c94-fbcb8e24cad2,},Annotations:map[string]string{io.kubernetes.container.hash: 402bb1e2,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e713b911a483c22e00dcb0423f7a498c8a1e2d0cb49f052b7f7ea017d524c14,PodSandboxId:578a3416f5d892ee2f6abdd060f83a2ca1357577a7ab0d8f90bec5dcfb7e1e12,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1689622999414703957,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-wqtgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751e7e2a-ac16-4ed3-a2a2-525707d4d84d,},Annotations:map[string]string{io.kubernetes.container.hash: 7711e259,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c720ae6c03731bdea255e8770e2f31e560717e6d78de0ed00b78066c89a21652,PodSandboxId:8875318cf44000b9a01b84e602b04469f4cc33e9f4baef75fbab0dc0ec4b8ebe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_EXITED,CreatedAt:1689622998714728573,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-882
959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee4f8d565bd40bb276359f9a3316e30,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dd7d620012bded34deeef2ec3386cb9036ac0ed8255e276646abb38d6ec371c,PodSandboxId:23bf275989aa0cebbe17e18872e5cfbe841b9ccc1d61eba41efe18b799210bd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_EXITED,CreatedAt:1689622998290377722,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 630e68b7fc44a2b8708830cc8b87ff6d,},Annotations:map[string]string{io.kubernetes.container.hash: d99c380c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf4ef96d6cd5c498572f61037aad7de89db67f22ada0211a0eede41c89e02832,PodSandboxId:04d6be588f83a25b865a30956b84e40e4d1a51fe0495e92d20824a92634136b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_EXITED,CreatedAt:1689622997752269802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-882959,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 146c3f581a9f3949700e695b352faa81,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cd806479e841d0c7ce2d834a8662b906ce551fbe29c2d6a254748e6426d13ab,PodSandboxId:dae5cce88bd5773450bb49d4763c330ff7c3db4535a422cb4720989af0c6c26a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1689622978792888320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-9m5pz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: decefa53-94e0-4dae-a713-5aa724208ceb,},Ann
otations:map[string]string{io.kubernetes.container.hash: f76a51bb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1bca5353-84f3-414b-b9d4-79048a804818 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:44:05 pause-882959 crio[2539]: time="2023-07-17 19:44:05.159943438Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=615ec972-dcc1-4bc7-958e-4025c7260d3f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:44:05 pause-882959 crio[2539]: time="2023-07-17 19:44:05.160042033Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=615ec972-dcc1-4bc7-958e-4025c7260d3f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:44:05 pause-882959 crio[2539]: time="2023-07-17 19:44:05.160703192Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4651fcefee21d8da426e4bfbeb78e89164404e4940824cdbc24ea66f1addf81c,PodSandboxId:c2d840626c67986a478229adbe4a47e97620a19641bfe49d8186f52c75c4bed4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689623023424584943,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zfl75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e585a501-1534-4a6d-8c94-fbcb8e24cad2,},Annotations:map[string]string{io.kubernetes.container.hash: 402bb1e2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ddeef059aeac9a52019532e6280b7ad7f6fa81d1425584c651d0ba1e49636e9,PodSandboxId:578a3416f5d892ee2f6abdd060f83a2ca1357577a7ab0d8f90bec5dcfb7e1e12,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689623023187023598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-wqtgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751e7e2a-ac16-4ed3-a2a2-525707d4d84d,},Annotations:map[string]string{io.kubernetes.container.hash: 7711e259,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552f85e3417ed736a1f2555de4433f52c389d54f83a05e9b551ad6e26f41f54b,PodSandboxId:8875318cf44000b9a01b84e602b04469f4cc33e9f4baef75fbab0dc0ec4b8ebe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689623015478720588,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee4f8d
565bd40bb276359f9a3316e30,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f6c76b47d06a2c8510573c69e753d035faa7afc3ecc92113b1d5517c1e7aa2,PodSandboxId:04d6be588f83a25b865a30956b84e40e4d1a51fe0495e92d20824a92634136b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689623015465560988,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 146c3f581a9f3949700e695b352faa81,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65e86b0e324b8ae4f3194fcab72f659490dd508368f347032b4c8ba84a4708d8,PodSandboxId:23bf275989aa0cebbe17e18872e5cfbe841b9ccc1d61eba41efe18b799210bd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689623015395941048,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 630e68b7fc44a2b8708830cc8b87ff6d,},Annotati
ons:map[string]string{io.kubernetes.container.hash: d99c380c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f5a361747df73dae8da3eab5401ba47a5d862de911f553d695d924d854b3325,PodSandboxId:d4ad4d6790f9a14cd0a47e44d12886aba3fabd14ec31fd27b12e132ca35b9f34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689623015405584109,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f90ddf48f9fe6fe40a497facd9340e,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4293fa42,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1d1ae4fbdbc1163966bde2d686c11c1507edff36bf2532346f67a91f5ffccb3,PodSandboxId:d4ad4d6790f9a14cd0a47e44d12886aba3fabd14ec31fd27b12e132ca35b9f34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_EXITED,CreatedAt:1689623006026593834,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f90ddf48f9fe6fe40a497facd9340e,},Annotations:map[string]string{io.kubernet
es.container.hash: 4293fa42,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea020bea67e6847143435bd85b774e1f7ae721b240ead0b53db406afbadd92d,PodSandboxId:c2d840626c67986a478229adbe4a47e97620a19641bfe49d8186f52c75c4bed4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_EXITED,CreatedAt:1689623000841860137,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zfl75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e585a501-1534-4a6d-8c94-fbcb8e24cad2,},Annotations:map[string]string{io.kubernetes.container.hash: 402bb1e2,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e713b911a483c22e00dcb0423f7a498c8a1e2d0cb49f052b7f7ea017d524c14,PodSandboxId:578a3416f5d892ee2f6abdd060f83a2ca1357577a7ab0d8f90bec5dcfb7e1e12,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1689622999414703957,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-wqtgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751e7e2a-ac16-4ed3-a2a2-525707d4d84d,},Annotations:map[string]string{io.kubernetes.container.hash: 7711e259,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c720ae6c03731bdea255e8770e2f31e560717e6d78de0ed00b78066c89a21652,PodSandboxId:8875318cf44000b9a01b84e602b04469f4cc33e9f4baef75fbab0dc0ec4b8ebe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_EXITED,CreatedAt:1689622998714728573,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-882
959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee4f8d565bd40bb276359f9a3316e30,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dd7d620012bded34deeef2ec3386cb9036ac0ed8255e276646abb38d6ec371c,PodSandboxId:23bf275989aa0cebbe17e18872e5cfbe841b9ccc1d61eba41efe18b799210bd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_EXITED,CreatedAt:1689622998290377722,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 630e68b7fc44a2b8708830cc8b87ff6d,},Annotations:map[string]string{io.kubernetes.container.hash: d99c380c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf4ef96d6cd5c498572f61037aad7de89db67f22ada0211a0eede41c89e02832,PodSandboxId:04d6be588f83a25b865a30956b84e40e4d1a51fe0495e92d20824a92634136b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_EXITED,CreatedAt:1689622997752269802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-882959,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 146c3f581a9f3949700e695b352faa81,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cd806479e841d0c7ce2d834a8662b906ce551fbe29c2d6a254748e6426d13ab,PodSandboxId:dae5cce88bd5773450bb49d4763c330ff7c3db4535a422cb4720989af0c6c26a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1689622978792888320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-9m5pz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: decefa53-94e0-4dae-a713-5aa724208ceb,},Ann
otations:map[string]string{io.kubernetes.container.hash: f76a51bb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=615ec972-dcc1-4bc7-958e-4025c7260d3f name=/runtime.v1alpha2.RuntimeService/ListContainers
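	(For context when reading the dump above: each Request/Response pair is a CRI client calling CRI-O's runtime.v1alpha2.RuntimeService/ListContainers with an empty filter, which is why every response returns the full container list for pause-882959. The following is a minimal sketch, not part of the test run, of issuing the same unfiltered call directly; it assumes CRI-O's default socket path /var/run/crio/crio.sock and the v1alpha2 CRI API that these log entries reference.

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
	)

	func main() {
		// Connect to the CRI-O socket (assumed default path).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// An empty filter mirrors the "No filters were applied, returning full
		// container list" requests seen in the journal above.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s name=%s attempt=%d state=%s\n",
				c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}

	For a quick check without writing code, `crictl ps -a` on the node goes through the same ListContainers RPC, so its output should correspond to the container set dumped in these responses.)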
	Jul 17 19:44:05 pause-882959 crio[2539]: time="2023-07-17 19:44:05.227964688Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b26e145a-ccfa-4ac9-84dd-7686e24530dc name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:44:05 pause-882959 crio[2539]: time="2023-07-17 19:44:05.228201171Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b26e145a-ccfa-4ac9-84dd-7686e24530dc name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:44:05 pause-882959 crio[2539]: time="2023-07-17 19:44:05.228590973Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4651fcefee21d8da426e4bfbeb78e89164404e4940824cdbc24ea66f1addf81c,PodSandboxId:c2d840626c67986a478229adbe4a47e97620a19641bfe49d8186f52c75c4bed4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689623023424584943,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zfl75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e585a501-1534-4a6d-8c94-fbcb8e24cad2,},Annotations:map[string]string{io.kubernetes.container.hash: 402bb1e2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ddeef059aeac9a52019532e6280b7ad7f6fa81d1425584c651d0ba1e49636e9,PodSandboxId:578a3416f5d892ee2f6abdd060f83a2ca1357577a7ab0d8f90bec5dcfb7e1e12,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689623023187023598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-wqtgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751e7e2a-ac16-4ed3-a2a2-525707d4d84d,},Annotations:map[string]string{io.kubernetes.container.hash: 7711e259,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552f85e3417ed736a1f2555de4433f52c389d54f83a05e9b551ad6e26f41f54b,PodSandboxId:8875318cf44000b9a01b84e602b04469f4cc33e9f4baef75fbab0dc0ec4b8ebe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689623015478720588,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee4f8d
565bd40bb276359f9a3316e30,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f6c76b47d06a2c8510573c69e753d035faa7afc3ecc92113b1d5517c1e7aa2,PodSandboxId:04d6be588f83a25b865a30956b84e40e4d1a51fe0495e92d20824a92634136b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689623015465560988,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 146c3f581a9f3949700e695b352faa81,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65e86b0e324b8ae4f3194fcab72f659490dd508368f347032b4c8ba84a4708d8,PodSandboxId:23bf275989aa0cebbe17e18872e5cfbe841b9ccc1d61eba41efe18b799210bd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689623015395941048,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 630e68b7fc44a2b8708830cc8b87ff6d,},Annotati
ons:map[string]string{io.kubernetes.container.hash: d99c380c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f5a361747df73dae8da3eab5401ba47a5d862de911f553d695d924d854b3325,PodSandboxId:d4ad4d6790f9a14cd0a47e44d12886aba3fabd14ec31fd27b12e132ca35b9f34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689623015405584109,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f90ddf48f9fe6fe40a497facd9340e,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4293fa42,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1d1ae4fbdbc1163966bde2d686c11c1507edff36bf2532346f67a91f5ffccb3,PodSandboxId:d4ad4d6790f9a14cd0a47e44d12886aba3fabd14ec31fd27b12e132ca35b9f34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_EXITED,CreatedAt:1689623006026593834,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f90ddf48f9fe6fe40a497facd9340e,},Annotations:map[string]string{io.kubernet
es.container.hash: 4293fa42,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea020bea67e6847143435bd85b774e1f7ae721b240ead0b53db406afbadd92d,PodSandboxId:c2d840626c67986a478229adbe4a47e97620a19641bfe49d8186f52c75c4bed4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_EXITED,CreatedAt:1689623000841860137,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zfl75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e585a501-1534-4a6d-8c94-fbcb8e24cad2,},Annotations:map[string]string{io.kubernetes.container.hash: 402bb1e2,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e713b911a483c22e00dcb0423f7a498c8a1e2d0cb49f052b7f7ea017d524c14,PodSandboxId:578a3416f5d892ee2f6abdd060f83a2ca1357577a7ab0d8f90bec5dcfb7e1e12,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1689622999414703957,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-wqtgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751e7e2a-ac16-4ed3-a2a2-525707d4d84d,},Annotations:map[string]string{io.kubernetes.container.hash: 7711e259,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c720ae6c03731bdea255e8770e2f31e560717e6d78de0ed00b78066c89a21652,PodSandboxId:8875318cf44000b9a01b84e602b04469f4cc33e9f4baef75fbab0dc0ec4b8ebe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_EXITED,CreatedAt:1689622998714728573,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-882
959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee4f8d565bd40bb276359f9a3316e30,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dd7d620012bded34deeef2ec3386cb9036ac0ed8255e276646abb38d6ec371c,PodSandboxId:23bf275989aa0cebbe17e18872e5cfbe841b9ccc1d61eba41efe18b799210bd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_EXITED,CreatedAt:1689622998290377722,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 630e68b7fc44a2b8708830cc8b87ff6d,},Annotations:map[string]string{io.kubernetes.container.hash: d99c380c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf4ef96d6cd5c498572f61037aad7de89db67f22ada0211a0eede41c89e02832,PodSandboxId:04d6be588f83a25b865a30956b84e40e4d1a51fe0495e92d20824a92634136b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_EXITED,CreatedAt:1689622997752269802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-882959,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 146c3f581a9f3949700e695b352faa81,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cd806479e841d0c7ce2d834a8662b906ce551fbe29c2d6a254748e6426d13ab,PodSandboxId:dae5cce88bd5773450bb49d4763c330ff7c3db4535a422cb4720989af0c6c26a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1689622978792888320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-9m5pz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: decefa53-94e0-4dae-a713-5aa724208ceb,},Ann
otations:map[string]string{io.kubernetes.container.hash: f76a51bb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b26e145a-ccfa-4ac9-84dd-7686e24530dc name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 19:44:05 pause-882959 crio[2539]: time="2023-07-17 19:44:05.680280024Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=ee7ce385-367c-42c1-b81c-bc278fa9ac3c name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 19:44:05 pause-882959 crio[2539]: time="2023-07-17 19:44:05.680624488Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:578a3416f5d892ee2f6abdd060f83a2ca1357577a7ab0d8f90bec5dcfb7e1e12,Metadata:&PodSandboxMetadata{Name:coredns-5d78c9869d-wqtgn,Uid:751e7e2a-ac16-4ed3-a2a2-525707d4d84d,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1689622997236322714,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5d78c9869d-wqtgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751e7e2a-ac16-4ed3-a2a2-525707d4d84d,k8s-app: kube-dns,pod-template-hash: 5d78c9869d,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T19:42:57.332358038Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c2d840626c67986a478229adbe4a47e97620a19641bfe49d8186f52c75c4bed4,Metadata:&PodSandboxMetadata{Name:kube-proxy-zfl75,Uid:e585a501-1534-4a6d-8c94-fbcb8e24cad2,Namespace:kube-system,Attempt
:2,},State:SANDBOX_READY,CreatedAt:1689622997223134948,Labels:map[string]string{controller-revision-hash: 56999f657b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-zfl75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e585a501-1534-4a6d-8c94-fbcb8e24cad2,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T19:42:57.239626388Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:23bf275989aa0cebbe17e18872e5cfbe841b9ccc1d61eba41efe18b799210bd4,Metadata:&PodSandboxMetadata{Name:etcd-pause-882959,Uid:630e68b7fc44a2b8708830cc8b87ff6d,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1689622997187713435,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 630e68b7fc44a2b8708830cc8b87ff6d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/
etcd.advertise-client-urls: https://192.168.61.161:2379,kubernetes.io/config.hash: 630e68b7fc44a2b8708830cc8b87ff6d,kubernetes.io/config.seen: 2023-07-17T19:42:42.435260652Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d4ad4d6790f9a14cd0a47e44d12886aba3fabd14ec31fd27b12e132ca35b9f34,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-882959,Uid:87f90ddf48f9fe6fe40a497facd9340e,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1689622997150151194,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f90ddf48f9fe6fe40a497facd9340e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.161:8443,kubernetes.io/config.hash: 87f90ddf48f9fe6fe40a497facd9340e,kubernetes.io/config.seen: 2023-07-17T19:42:42.435264865Z,kubernetes.io/config.source: file,},RuntimeH
andler:,},&PodSandbox{Id:8875318cf44000b9a01b84e602b04469f4cc33e9f4baef75fbab0dc0ec4b8ebe,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-882959,Uid:2ee4f8d565bd40bb276359f9a3316e30,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1689622997121390059,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee4f8d565bd40bb276359f9a3316e30,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2ee4f8d565bd40bb276359f9a3316e30,kubernetes.io/config.seen: 2023-07-17T19:42:42.435266816Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:04d6be588f83a25b865a30956b84e40e4d1a51fe0495e92d20824a92634136b5,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-882959,Uid:146c3f581a9f3949700e695b352faa81,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1689622997080413997,Labels:map[string]str
ing{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 146c3f581a9f3949700e695b352faa81,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 146c3f581a9f3949700e695b352faa81,kubernetes.io/config.seen: 2023-07-17T19:42:42.435266110Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dae5cce88bd5773450bb49d4763c330ff7c3db4535a422cb4720989af0c6c26a,Metadata:&PodSandboxMetadata{Name:coredns-5d78c9869d-9m5pz,Uid:decefa53-94e0-4dae-a713-5aa724208ceb,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1689622977674710281,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5d78c9869d-9m5pz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: decefa53-94e0-4dae-a713-5aa724208ceb,k8s-app: kube-dns,pod-template-hash: 5d78c9869d,},Annotations:map[string]string{kubernetes.io
/config.seen: 2023-07-17T19:42:57.295246981Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=ee7ce385-367c-42c1-b81c-bc278fa9ac3c name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 19:44:05 pause-882959 crio[2539]: time="2023-07-17 19:44:05.681804332Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=57d86890-27be-4230-9e16-e094afcaeada name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:44:05 pause-882959 crio[2539]: time="2023-07-17 19:44:05.681864002Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=57d86890-27be-4230-9e16-e094afcaeada name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:44:05 pause-882959 crio[2539]: time="2023-07-17 19:44:05.682341582Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4651fcefee21d8da426e4bfbeb78e89164404e4940824cdbc24ea66f1addf81c,PodSandboxId:c2d840626c67986a478229adbe4a47e97620a19641bfe49d8186f52c75c4bed4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689623023424584943,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zfl75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e585a501-1534-4a6d-8c94-fbcb8e24cad2,},Annotations:map[string]string{io.kubernetes.container.hash: 402bb1e2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ddeef059aeac9a52019532e6280b7ad7f6fa81d1425584c651d0ba1e49636e9,PodSandboxId:578a3416f5d892ee2f6abdd060f83a2ca1357577a7ab0d8f90bec5dcfb7e1e12,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689623023187023598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-wqtgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751e7e2a-ac16-4ed3-a2a2-525707d4d84d,},Annotations:map[string]string{io.kubernetes.container.hash: 7711e259,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552f85e3417ed736a1f2555de4433f52c389d54f83a05e9b551ad6e26f41f54b,PodSandboxId:8875318cf44000b9a01b84e602b04469f4cc33e9f4baef75fbab0dc0ec4b8ebe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689623015478720588,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee4f8d
565bd40bb276359f9a3316e30,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f6c76b47d06a2c8510573c69e753d035faa7afc3ecc92113b1d5517c1e7aa2,PodSandboxId:04d6be588f83a25b865a30956b84e40e4d1a51fe0495e92d20824a92634136b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689623015465560988,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 146c3f581a9f3949700e695b352faa81,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65e86b0e324b8ae4f3194fcab72f659490dd508368f347032b4c8ba84a4708d8,PodSandboxId:23bf275989aa0cebbe17e18872e5cfbe841b9ccc1d61eba41efe18b799210bd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689623015395941048,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 630e68b7fc44a2b8708830cc8b87ff6d,},Annotati
ons:map[string]string{io.kubernetes.container.hash: d99c380c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f5a361747df73dae8da3eab5401ba47a5d862de911f553d695d924d854b3325,PodSandboxId:d4ad4d6790f9a14cd0a47e44d12886aba3fabd14ec31fd27b12e132ca35b9f34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689623015405584109,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f90ddf48f9fe6fe40a497facd9340e,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4293fa42,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1d1ae4fbdbc1163966bde2d686c11c1507edff36bf2532346f67a91f5ffccb3,PodSandboxId:d4ad4d6790f9a14cd0a47e44d12886aba3fabd14ec31fd27b12e132ca35b9f34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_EXITED,CreatedAt:1689623006026593834,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f90ddf48f9fe6fe40a497facd9340e,},Annotations:map[string]string{io.kubernet
es.container.hash: 4293fa42,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea020bea67e6847143435bd85b774e1f7ae721b240ead0b53db406afbadd92d,PodSandboxId:c2d840626c67986a478229adbe4a47e97620a19641bfe49d8186f52c75c4bed4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_EXITED,CreatedAt:1689623000841860137,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zfl75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e585a501-1534-4a6d-8c94-fbcb8e24cad2,},Annotations:map[string]string{io.kubernetes.container.hash: 402bb1e2,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e713b911a483c22e00dcb0423f7a498c8a1e2d0cb49f052b7f7ea017d524c14,PodSandboxId:578a3416f5d892ee2f6abdd060f83a2ca1357577a7ab0d8f90bec5dcfb7e1e12,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1689622999414703957,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-wqtgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751e7e2a-ac16-4ed3-a2a2-525707d4d84d,},Annotations:map[string]string{io.kubernetes.container.hash: 7711e259,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c720ae6c03731bdea255e8770e2f31e560717e6d78de0ed00b78066c89a21652,PodSandboxId:8875318cf44000b9a01b84e602b04469f4cc33e9f4baef75fbab0dc0ec4b8ebe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_EXITED,CreatedAt:1689622998714728573,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-882
959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee4f8d565bd40bb276359f9a3316e30,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dd7d620012bded34deeef2ec3386cb9036ac0ed8255e276646abb38d6ec371c,PodSandboxId:23bf275989aa0cebbe17e18872e5cfbe841b9ccc1d61eba41efe18b799210bd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_EXITED,CreatedAt:1689622998290377722,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-882959,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 630e68b7fc44a2b8708830cc8b87ff6d,},Annotations:map[string]string{io.kubernetes.container.hash: d99c380c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf4ef96d6cd5c498572f61037aad7de89db67f22ada0211a0eede41c89e02832,PodSandboxId:04d6be588f83a25b865a30956b84e40e4d1a51fe0495e92d20824a92634136b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_EXITED,CreatedAt:1689622997752269802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-882959,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 146c3f581a9f3949700e695b352faa81,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cd806479e841d0c7ce2d834a8662b906ce551fbe29c2d6a254748e6426d13ab,PodSandboxId:dae5cce88bd5773450bb49d4763c330ff7c3db4535a422cb4720989af0c6c26a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1689622978792888320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-9m5pz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: decefa53-94e0-4dae-a713-5aa724208ceb,},Ann
otations:map[string]string{io.kubernetes.container.hash: f76a51bb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=57d86890-27be-4230-9e16-e094afcaeada name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	4651fcefee21d       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c   23 seconds ago       Running             kube-proxy                2                   c2d840626c679
	2ddeef059aeac       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   23 seconds ago       Running             coredns                   2                   578a3416f5d89
	552f85e3417ed       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a   31 seconds ago       Running             kube-scheduler            2                   8875318cf4400
	86f6c76b47d06       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f   31 seconds ago       Running             kube-controller-manager   2                   04d6be588f83a
	3f5a361747df7       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a   31 seconds ago       Running             kube-apiserver            3                   d4ad4d6790f9a
	65e86b0e324b8       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681   31 seconds ago       Running             etcd                      2                   23bf275989aa0
	b1d1ae4fbdbc1       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a   41 seconds ago       Exited              kube-apiserver            2                   d4ad4d6790f9a
	aea020bea67e6       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c   46 seconds ago       Exited              kube-proxy                1                   c2d840626c679
	9e713b911a483       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   47 seconds ago       Exited              coredns                   1                   578a3416f5d89
	c720ae6c03731       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a   48 seconds ago       Exited              kube-scheduler            1                   8875318cf4400
	7dd7d620012bd       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681   48 seconds ago       Exited              etcd                      1                   23bf275989aa0
	bf4ef96d6cd5c       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f   49 seconds ago       Exited              kube-controller-manager   1                   04d6be588f83a
	3cd806479e841       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   About a minute ago   Exited              coredns                   0                   dae5cce88bd57
	
	* 
	* ==> coredns [2ddeef059aeac9a52019532e6280b7ad7f6fa81d1425584c651d0ba1e49636e9] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41893 - 12192 "HINFO IN 5125562647933031373.447300645467008570. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010659901s
	
	* 
	* ==> coredns [3cd806479e841d0c7ce2d834a8662b906ce551fbe29c2d6a254748e6426d13ab] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:33241 - 31033 "HINFO IN 1596802302159776750.6556267873721546673. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00918775s
	
	* 
	* ==> coredns [9e713b911a483c22e00dcb0423f7a498c8a1e2d0cb49f052b7f7ea017d524c14] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34002 - 47690 "HINFO IN 6248114585339225492.101746283056818754. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.009156952s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> describe nodes <==
	* Name:               pause-882959
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-882959
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5
	                    minikube.k8s.io/name=pause-882959
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T19_42_42_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 19:42:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-882959
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 19:44:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 19:43:42 +0000   Mon, 17 Jul 2023 19:42:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 19:43:42 +0000   Mon, 17 Jul 2023 19:42:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 19:43:42 +0000   Mon, 17 Jul 2023 19:42:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 19:43:42 +0000   Mon, 17 Jul 2023 19:42:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.161
	  Hostname:    pause-882959
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 31e8a3cc94384d26a2c6226e22e3aa53
	  System UUID:                31e8a3cc-9438-4d26-a2c6-226e22e3aa53
	  Boot ID:                    20705618-4604-465f-9569-f9fd101ca5e3
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-wqtgn                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     71s
	  kube-system                 etcd-pause-882959                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         86s
	  kube-system                 kube-apiserver-pause-882959             250m (12%)    0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-controller-manager-pause-882959    200m (10%)    0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-proxy-zfl75                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-scheduler-pause-882959             100m (5%)     0 (0%)      0 (0%)           0 (0%)         87s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 68s                kube-proxy       
	  Normal  Starting                 24s                kube-proxy       
	  Normal  NodeHasSufficientPID     96s (x7 over 96s)  kubelet          Node pause-882959 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    96s (x8 over 96s)  kubelet          Node pause-882959 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  96s (x8 over 96s)  kubelet          Node pause-882959 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  96s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                86s                kubelet          Node pause-882959 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  86s                kubelet          Node pause-882959 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    86s                kubelet          Node pause-882959 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     86s                kubelet          Node pause-882959 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  86s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 86s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           74s                node-controller  Node pause-882959 event: Registered Node pause-882959 in Controller
	  Normal  Starting                 34s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  34s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  33s (x8 over 34s)  kubelet          Node pause-882959 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s (x8 over 34s)  kubelet          Node pause-882959 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s (x7 over 34s)  kubelet          Node pause-882959 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12s                node-controller  Node pause-882959 event: Registered Node pause-882959 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.075816] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Jul17 19:42] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.662238] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.142880] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.099625] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000093] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.183058] systemd-fstab-generator[643]: Ignoring "noauto" for root device
	[  +0.131327] systemd-fstab-generator[654]: Ignoring "noauto" for root device
	[  +0.159085] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.124017] systemd-fstab-generator[678]: Ignoring "noauto" for root device
	[  +0.321524] systemd-fstab-generator[702]: Ignoring "noauto" for root device
	[ +12.540847] systemd-fstab-generator[927]: Ignoring "noauto" for root device
	[ +10.870742] systemd-fstab-generator[1269]: Ignoring "noauto" for root device
	[Jul17 19:43] systemd-fstab-generator[2166]: Ignoring "noauto" for root device
	[  +0.186600] systemd-fstab-generator[2177]: Ignoring "noauto" for root device
	[  +0.243330] kauditd_printk_skb: 24 callbacks suppressed
	[  +0.389091] systemd-fstab-generator[2322]: Ignoring "noauto" for root device
	[  +0.376788] systemd-fstab-generator[2384]: Ignoring "noauto" for root device
	[  +0.566955] systemd-fstab-generator[2409]: Ignoring "noauto" for root device
	[  +3.780056] kauditd_printk_skb: 3 callbacks suppressed
	[ +17.041347] systemd-fstab-generator[3504]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [65e86b0e324b8ae4f3194fcab72f659490dd508368f347032b4c8ba84a4708d8] <==
	* {"level":"info","ts":"2023-07-17T19:43:43.403Z","caller":"traceutil/trace.go:171","msg":"trace[463496343] transaction","detail":"{read_only:false; response_revision:437; number_of_response:1; }","duration":"207.720558ms","start":"2023-07-17T19:43:43.195Z","end":"2023-07-17T19:43:43.403Z","steps":["trace[463496343] 'process raft request'  (duration: 138.054469ms)","trace[463496343] 'compare'  (duration: 68.052671ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-17T19:43:43.404Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"205.495645ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-17T19:43:43.405Z","caller":"traceutil/trace.go:171","msg":"trace[941357097] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:437; }","duration":"205.940851ms","start":"2023-07-17T19:43:43.199Z","end":"2023-07-17T19:43:43.405Z","steps":["trace[941357097] 'agreement among raft nodes before linearized reading'  (duration: 205.054485ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T19:43:43.406Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"206.748623ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" ","response":"range_response_count:50 size:35108"}
	{"level":"info","ts":"2023-07-17T19:43:43.406Z","caller":"traceutil/trace.go:171","msg":"trace[731044362] range","detail":"{range_begin:/registry/clusterrolebindings/; range_end:/registry/clusterrolebindings0; response_count:50; response_revision:437; }","duration":"207.170319ms","start":"2023-07-17T19:43:43.199Z","end":"2023-07-17T19:43:43.406Z","steps":["trace[731044362] 'agreement among raft nodes before linearized reading'  (duration: 206.348356ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T19:43:43.410Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"188.034799ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-17T19:43:43.411Z","caller":"traceutil/trace.go:171","msg":"trace[2142756550] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:437; }","duration":"188.385589ms","start":"2023-07-17T19:43:43.222Z","end":"2023-07-17T19:43:43.411Z","steps":["trace[2142756550] 'agreement among raft nodes before linearized reading'  (duration: 187.917153ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T19:43:43.412Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"208.546458ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5d78c9869d-9m5pz\" ","response":"range_response_count:1 size:4610"}
	{"level":"info","ts":"2023-07-17T19:43:43.412Z","caller":"traceutil/trace.go:171","msg":"trace[873570816] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5d78c9869d-9m5pz; range_end:; response_count:1; response_revision:437; }","duration":"209.043546ms","start":"2023-07-17T19:43:43.203Z","end":"2023-07-17T19:43:43.412Z","steps":["trace[873570816] 'agreement among raft nodes before linearized reading'  (duration: 208.392148ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T19:43:43.414Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"214.692156ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/system-cluster-critical\" ","response":"range_response_count:1 size:477"}
	{"level":"info","ts":"2023-07-17T19:43:43.417Z","caller":"traceutil/trace.go:171","msg":"trace[1530418752] range","detail":"{range_begin:/registry/priorityclasses/system-cluster-critical; range_end:; response_count:1; response_revision:437; }","duration":"217.729768ms","start":"2023-07-17T19:43:43.199Z","end":"2023-07-17T19:43:43.417Z","steps":["trace[1530418752] 'agreement among raft nodes before linearized reading'  (duration: 214.624685ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T19:43:51.571Z","caller":"traceutil/trace.go:171","msg":"trace[234472061] transaction","detail":"{read_only:false; response_revision:472; number_of_response:1; }","duration":"303.526964ms","start":"2023-07-17T19:43:51.267Z","end":"2023-07-17T19:43:51.571Z","steps":["trace[234472061] 'process raft request'  (duration: 303.349398ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T19:43:51.571Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"280.594151ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-882959\" ","response":"range_response_count:1 size:5478"}
	{"level":"info","ts":"2023-07-17T19:43:51.571Z","caller":"traceutil/trace.go:171","msg":"trace[2019979269] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-882959; range_end:; response_count:1; response_revision:472; }","duration":"280.734286ms","start":"2023-07-17T19:43:51.290Z","end":"2023-07-17T19:43:51.571Z","steps":["trace[2019979269] 'agreement among raft nodes before linearized reading'  (duration: 280.493047ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T19:43:51.571Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T19:43:51.267Z","time spent":"304.052072ms","remote":"127.0.0.1:36360","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4376,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-882959\" mod_revision:419 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-882959\" value_size:4314 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-882959\" > >"}
	{"level":"info","ts":"2023-07-17T19:43:51.571Z","caller":"traceutil/trace.go:171","msg":"trace[2036788317] linearizableReadLoop","detail":"{readStateIndex:506; appliedIndex:506; }","duration":"280.439024ms","start":"2023-07-17T19:43:51.290Z","end":"2023-07-17T19:43:51.571Z","steps":["trace[2036788317] 'read index received'  (duration: 280.433314ms)","trace[2036788317] 'applied index is now lower than readState.Index'  (duration: 4.54µs)"],"step_count":2}
	{"level":"warn","ts":"2023-07-17T19:43:56.499Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.782035ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3423731210247660832 > lease_revoke:<id:2f8389656048dbad>","response":"size:27"}
	{"level":"info","ts":"2023-07-17T19:43:56.499Z","caller":"traceutil/trace.go:171","msg":"trace[2033090678] linearizableReadLoop","detail":"{readStateIndex:520; appliedIndex:519; }","duration":"209.972906ms","start":"2023-07-17T19:43:56.289Z","end":"2023-07-17T19:43:56.499Z","steps":["trace[2033090678] 'read index received'  (duration: 44.973845ms)","trace[2033090678] 'applied index is now lower than readState.Index'  (duration: 164.99758ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-17T19:43:56.499Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"210.11732ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-882959\" ","response":"range_response_count:1 size:5478"}
	{"level":"info","ts":"2023-07-17T19:43:56.499Z","caller":"traceutil/trace.go:171","msg":"trace[1119367753] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-882959; range_end:; response_count:1; response_revision:483; }","duration":"210.150054ms","start":"2023-07-17T19:43:56.289Z","end":"2023-07-17T19:43:56.499Z","steps":["trace[1119367753] 'agreement among raft nodes before linearized reading'  (duration: 210.061134ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T19:43:56.500Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.754398ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-17T19:43:56.500Z","caller":"traceutil/trace.go:171","msg":"trace[487503170] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:483; }","duration":"150.797973ms","start":"2023-07-17T19:43:56.349Z","end":"2023-07-17T19:43:56.500Z","steps":["trace[487503170] 'agreement among raft nodes before linearized reading'  (duration: 150.650569ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T19:43:56.889Z","caller":"traceutil/trace.go:171","msg":"trace[2002614845] transaction","detail":"{read_only:false; response_revision:484; number_of_response:1; }","duration":"373.231263ms","start":"2023-07-17T19:43:56.516Z","end":"2023-07-17T19:43:56.889Z","steps":["trace[2002614845] 'process raft request'  (duration: 342.163559ms)","trace[2002614845] 'compare'  (duration: 30.522504ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-17T19:43:56.889Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T19:43:56.516Z","time spent":"373.617112ms","remote":"127.0.0.1:36360","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5463,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-pause-882959\" mod_revision:433 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-pause-882959\" value_size:5411 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-pause-882959\" > >"}
	{"level":"info","ts":"2023-07-17T19:43:57.072Z","caller":"traceutil/trace.go:171","msg":"trace[2051540128] transaction","detail":"{read_only:false; response_revision:485; number_of_response:1; }","duration":"168.293702ms","start":"2023-07-17T19:43:56.904Z","end":"2023-07-17T19:43:57.072Z","steps":["trace[2051540128] 'process raft request'  (duration: 84.611605ms)","trace[2051540128] 'compare'  (duration: 83.570465ms)"],"step_count":2}
	
	* 
	* ==> etcd [7dd7d620012bded34deeef2ec3386cb9036ac0ed8255e276646abb38d6ec371c] <==
	* {"level":"info","ts":"2023-07-17T19:43:20.144Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.61.161:2380"}
	{"level":"info","ts":"2023-07-17T19:43:20.144Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4166e968fa162f83 switched to configuration voters=(4712710697171431299)"}
	{"level":"info","ts":"2023-07-17T19:43:20.144Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"41c4ab09a330aec","local-member-id":"4166e968fa162f83","added-peer-id":"4166e968fa162f83","added-peer-peer-urls":["https://192.168.61.161:2380"]}
	{"level":"info","ts":"2023-07-17T19:43:20.144Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"41c4ab09a330aec","local-member-id":"4166e968fa162f83","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T19:43:20.144Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T19:43:21.124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4166e968fa162f83 is starting a new election at term 2"}
	{"level":"info","ts":"2023-07-17T19:43:21.124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4166e968fa162f83 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-07-17T19:43:21.124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4166e968fa162f83 received MsgPreVoteResp from 4166e968fa162f83 at term 2"}
	{"level":"info","ts":"2023-07-17T19:43:21.124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4166e968fa162f83 became candidate at term 3"}
	{"level":"info","ts":"2023-07-17T19:43:21.124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4166e968fa162f83 received MsgVoteResp from 4166e968fa162f83 at term 3"}
	{"level":"info","ts":"2023-07-17T19:43:21.124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4166e968fa162f83 became leader at term 3"}
	{"level":"info","ts":"2023-07-17T19:43:21.124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4166e968fa162f83 elected leader 4166e968fa162f83 at term 3"}
	{"level":"info","ts":"2023-07-17T19:43:21.126Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"4166e968fa162f83","local-member-attributes":"{Name:pause-882959 ClientURLs:[https://192.168.61.161:2379]}","request-path":"/0/members/4166e968fa162f83/attributes","cluster-id":"41c4ab09a330aec","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-17T19:43:21.127Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T19:43:21.128Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T19:43:21.128Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.61.161:2379"}
	{"level":"info","ts":"2023-07-17T19:43:21.128Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-17T19:43:21.128Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-17T19:43:21.129Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-17T19:43:32.293Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-07-17T19:43:32.293Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"pause-882959","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.161:2380"],"advertise-client-urls":["https://192.168.61.161:2379"]}
	{"level":"info","ts":"2023-07-17T19:43:32.296Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"4166e968fa162f83","current-leader-member-id":"4166e968fa162f83"}
	{"level":"info","ts":"2023-07-17T19:43:32.301Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.61.161:2380"}
	{"level":"info","ts":"2023-07-17T19:43:32.302Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.61.161:2380"}
	{"level":"info","ts":"2023-07-17T19:43:32.302Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"pause-882959","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.161:2380"],"advertise-client-urls":["https://192.168.61.161:2379"]}
	
	* 
	* ==> kernel <==
	*  19:44:08 up 2 min,  0 users,  load average: 1.45, 0.64, 0.25
	Linux pause-882959 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [3f5a361747df73dae8da3eab5401ba47a5d862de911f553d695d924d854b3325] <==
	* I0717 19:43:42.059149       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0717 19:43:42.059248       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0717 19:43:42.159658       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0717 19:43:42.161964       1 aggregator.go:152] initial CRD sync complete...
	I0717 19:43:42.162149       1 autoregister_controller.go:141] Starting autoregister controller
	I0717 19:43:42.162186       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 19:43:42.162215       1 cache.go:39] Caches are synced for autoregister controller
	I0717 19:43:42.162985       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0717 19:43:42.163622       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 19:43:42.164648       1 shared_informer.go:318] Caches are synced for configmaps
	I0717 19:43:42.164746       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0717 19:43:42.169688       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	E0717 19:43:42.186744       1 controller.go:155] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0717 19:43:42.209259       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0717 19:43:42.209357       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0717 19:43:42.211330       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 19:43:42.633165       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0717 19:43:43.450551       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 19:43:44.973008       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0717 19:43:44.994714       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0717 19:43:45.076358       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0717 19:43:45.149341       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 19:43:45.174732       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 19:43:56.002516       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 19:43:56.044536       1 controller.go:624] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-apiserver [b1d1ae4fbdbc1163966bde2d686c11c1507edff36bf2532346f67a91f5ffccb3] <==
	* 
	* 
	* ==> kube-controller-manager [86f6c76b47d06a2c8510573c69e753d035faa7afc3ecc92113b1d5517c1e7aa2] <==
	* I0717 19:43:55.975169       1 shared_informer.go:318] Caches are synced for ephemeral
	I0717 19:43:55.981465       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0717 19:43:55.981613       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0717 19:43:55.983565       1 shared_informer.go:318] Caches are synced for node
	I0717 19:43:55.983866       1 range_allocator.go:174] "Sending events to api server"
	I0717 19:43:55.984144       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0717 19:43:55.984153       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0717 19:43:55.984160       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0717 19:43:55.989996       1 shared_informer.go:318] Caches are synced for GC
	I0717 19:43:55.999428       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0717 19:43:56.003933       1 shared_informer.go:318] Caches are synced for disruption
	I0717 19:43:56.020259       1 shared_informer.go:318] Caches are synced for taint
	I0717 19:43:56.020546       1 node_lifecycle_controller.go:1223] "Initializing eviction metric for zone" zone=""
	I0717 19:43:56.020719       1 shared_informer.go:318] Caches are synced for endpoint
	I0717 19:43:56.020810       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0717 19:43:56.020952       1 taint_manager.go:211] "Sending events to api server"
	I0717 19:43:56.021412       1 shared_informer.go:318] Caches are synced for stateful set
	I0717 19:43:56.021643       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-882959"
	I0717 19:43:56.021806       1 node_lifecycle_controller.go:1069] "Controller detected that zone is now in new state" zone="" newState=Normal
	I0717 19:43:56.021899       1 event.go:307] "Event occurred" object="pause-882959" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-882959 event: Registered Node pause-882959 in Controller"
	I0717 19:43:56.022141       1 shared_informer.go:318] Caches are synced for resource quota
	I0717 19:43:56.022265       1 shared_informer.go:318] Caches are synced for attach detach
	I0717 19:43:56.401199       1 shared_informer.go:318] Caches are synced for garbage collector
	I0717 19:43:56.417657       1 shared_informer.go:318] Caches are synced for garbage collector
	I0717 19:43:56.417737       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-controller-manager [bf4ef96d6cd5c498572f61037aad7de89db67f22ada0211a0eede41c89e02832] <==
	* I0717 19:43:19.844801       1 serving.go:348] Generated self-signed cert in-memory
	I0717 19:43:20.627380       1 controllermanager.go:187] "Starting" version="v1.27.3"
	I0717 19:43:20.627432       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 19:43:20.629202       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0717 19:43:20.629329       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0717 19:43:20.630342       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0717 19:43:20.630426       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0717 19:43:30.633419       1 controllermanager.go:233] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.61.161:8443/healthz\": dial tcp 192.168.61.161:8443: connect: connection refused"
	
	* 
	* ==> kube-proxy [4651fcefee21d8da426e4bfbeb78e89164404e4940824cdbc24ea66f1addf81c] <==
	* I0717 19:43:44.031240       1 node.go:141] Successfully retrieved node IP: 192.168.61.161
	I0717 19:43:44.031384       1 server_others.go:110] "Detected node IP" address="192.168.61.161"
	I0717 19:43:44.031461       1 server_others.go:554] "Using iptables proxy"
	I0717 19:43:44.204310       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0717 19:43:44.204566       1 server_others.go:192] "Using iptables Proxier"
	I0717 19:43:44.204797       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 19:43:44.205944       1 server.go:658] "Version info" version="v1.27.3"
	I0717 19:43:44.206376       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 19:43:44.207567       1 config.go:188] "Starting service config controller"
	I0717 19:43:44.207659       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 19:43:44.207760       1 config.go:97] "Starting endpoint slice config controller"
	I0717 19:43:44.207795       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 19:43:44.208540       1 config.go:315] "Starting node config controller"
	I0717 19:43:44.208594       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 19:43:44.308405       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0717 19:43:44.308434       1 shared_informer.go:318] Caches are synced for service config
	I0717 19:43:44.308763       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [aea020bea67e6847143435bd85b774e1f7ae721b240ead0b53db406afbadd92d] <==
	* E0717 19:43:21.105116       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-882959": dial tcp 192.168.61.161:8443: connect: connection refused
	E0717 19:43:22.211693       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-882959": dial tcp 192.168.61.161:8443: connect: connection refused
	E0717 19:43:24.217872       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-882959": dial tcp 192.168.61.161:8443: connect: connection refused
	
	* 
	* ==> kube-scheduler [552f85e3417ed736a1f2555de4433f52c389d54f83a05e9b551ad6e26f41f54b] <==
	* I0717 19:43:39.232436       1 serving.go:348] Generated self-signed cert in-memory
	W0717 19:43:42.100284       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 19:43:42.100495       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 19:43:42.100617       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 19:43:42.100647       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 19:43:42.183854       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.3"
	I0717 19:43:42.184044       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 19:43:42.200186       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 19:43:42.200329       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 19:43:42.203016       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0717 19:43:42.204329       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 19:43:42.306515       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [c720ae6c03731bdea255e8770e2f31e560717e6d78de0ed00b78066c89a21652] <==
	* E0717 19:43:29.144988       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.61.161:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	W0717 19:43:29.277716       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.61.161:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	E0717 19:43:29.277881       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.61.161:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	W0717 19:43:29.286775       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.61.161:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	E0717 19:43:29.286940       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.61.161:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	W0717 19:43:29.752228       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.61.161:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	E0717 19:43:29.752382       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.61.161:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	W0717 19:43:29.758424       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.61.161:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	E0717 19:43:29.758496       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.61.161:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	W0717 19:43:29.871633       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.61.161:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	E0717 19:43:29.871729       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.61.161:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	W0717 19:43:30.517960       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.61.161:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	E0717 19:43:30.518262       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.61.161:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	W0717 19:43:30.545764       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.61.161:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	E0717 19:43:30.545907       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.61.161:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	W0717 19:43:30.685961       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.61.161:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	E0717 19:43:30.686180       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.61.161:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	W0717 19:43:30.996542       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://192.168.61.161:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	E0717 19:43:30.996681       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.61.161:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	W0717 19:43:31.231840       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.61.161:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	E0717 19:43:31.231967       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.61.161:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	W0717 19:43:31.269050       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.61.161:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	E0717 19:43:31.269276       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.61.161:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	I0717 19:43:32.123137       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0717 19:43:32.123912       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-07-17 19:42:04 UTC, ends at Mon 2023-07-17 19:44:09 UTC. --
	Jul 17 19:43:35 pause-882959 kubelet[3510]: E0717 19:43:35.776046    3510 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-882959&limit=500&resourceVersion=0": dial tcp 192.168.61.161:8443: connect: connection refused
	Jul 17 19:43:35 pause-882959 kubelet[3510]: I0717 19:43:35.827568    3510 kubelet_node_status.go:70] "Attempting to register node" node="pause-882959"
	Jul 17 19:43:35 pause-882959 kubelet[3510]: E0717 19:43:35.828315    3510 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.161:8443: connect: connection refused" node="pause-882959"
	Jul 17 19:43:37 pause-882959 kubelet[3510]: I0717 19:43:37.430534    3510 kubelet_node_status.go:70] "Attempting to register node" node="pause-882959"
	Jul 17 19:43:42 pause-882959 kubelet[3510]: I0717 19:43:42.234244    3510 kubelet_node_status.go:108] "Node was previously registered" node="pause-882959"
	Jul 17 19:43:42 pause-882959 kubelet[3510]: I0717 19:43:42.234335    3510 kubelet_node_status.go:73] "Successfully registered node" node="pause-882959"
	Jul 17 19:43:42 pause-882959 kubelet[3510]: I0717 19:43:42.237487    3510 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 17 19:43:42 pause-882959 kubelet[3510]: I0717 19:43:42.238674    3510 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 17 19:43:42 pause-882959 kubelet[3510]: I0717 19:43:42.289438    3510 apiserver.go:52] "Watching apiserver"
	Jul 17 19:43:42 pause-882959 kubelet[3510]: I0717 19:43:42.294004    3510 topology_manager.go:212] "Topology Admit Handler"
	Jul 17 19:43:42 pause-882959 kubelet[3510]: I0717 19:43:42.294329    3510 topology_manager.go:212] "Topology Admit Handler"
	Jul 17 19:43:42 pause-882959 kubelet[3510]: I0717 19:43:42.294457    3510 topology_manager.go:212] "Topology Admit Handler"
	Jul 17 19:43:42 pause-882959 kubelet[3510]: I0717 19:43:42.307705    3510 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
	Jul 17 19:43:42 pause-882959 kubelet[3510]: I0717 19:43:42.373299    3510 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9txg\" (UniqueName: \"kubernetes.io/projected/751e7e2a-ac16-4ed3-a2a2-525707d4d84d-kube-api-access-p9txg\") pod \"coredns-5d78c9869d-wqtgn\" (UID: \"751e7e2a-ac16-4ed3-a2a2-525707d4d84d\") " pod="kube-system/coredns-5d78c9869d-wqtgn"
	Jul 17 19:43:42 pause-882959 kubelet[3510]: I0717 19:43:42.373401    3510 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwjjr\" (UniqueName: \"kubernetes.io/projected/e585a501-1534-4a6d-8c94-fbcb8e24cad2-kube-api-access-xwjjr\") pod \"kube-proxy-zfl75\" (UID: \"e585a501-1534-4a6d-8c94-fbcb8e24cad2\") " pod="kube-system/kube-proxy-zfl75"
	Jul 17 19:43:42 pause-882959 kubelet[3510]: I0717 19:43:42.373441    3510 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/751e7e2a-ac16-4ed3-a2a2-525707d4d84d-config-volume\") pod \"coredns-5d78c9869d-wqtgn\" (UID: \"751e7e2a-ac16-4ed3-a2a2-525707d4d84d\") " pod="kube-system/coredns-5d78c9869d-wqtgn"
	Jul 17 19:43:42 pause-882959 kubelet[3510]: I0717 19:43:42.373483    3510 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e585a501-1534-4a6d-8c94-fbcb8e24cad2-kube-proxy\") pod \"kube-proxy-zfl75\" (UID: \"e585a501-1534-4a6d-8c94-fbcb8e24cad2\") " pod="kube-system/kube-proxy-zfl75"
	Jul 17 19:43:42 pause-882959 kubelet[3510]: I0717 19:43:42.373558    3510 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e585a501-1534-4a6d-8c94-fbcb8e24cad2-xtables-lock\") pod \"kube-proxy-zfl75\" (UID: \"e585a501-1534-4a6d-8c94-fbcb8e24cad2\") " pod="kube-system/kube-proxy-zfl75"
	Jul 17 19:43:42 pause-882959 kubelet[3510]: I0717 19:43:42.373595    3510 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e585a501-1534-4a6d-8c94-fbcb8e24cad2-lib-modules\") pod \"kube-proxy-zfl75\" (UID: \"e585a501-1534-4a6d-8c94-fbcb8e24cad2\") " pod="kube-system/kube-proxy-zfl75"
	Jul 17 19:43:42 pause-882959 kubelet[3510]: I0717 19:43:42.373610    3510 reconciler.go:41] "Reconciler: start to sync state"
	Jul 17 19:43:42 pause-882959 kubelet[3510]: I0717 19:43:42.596014    3510 scope.go:115] "RemoveContainer" containerID="9e713b911a483c22e00dcb0423f7a498c8a1e2d0cb49f052b7f7ea017d524c14"
	Jul 17 19:43:42 pause-882959 kubelet[3510]: I0717 19:43:42.598803    3510 scope.go:115] "RemoveContainer" containerID="aea020bea67e6847143435bd85b774e1f7ae721b240ead0b53db406afbadd92d"
	Jul 17 19:43:44 pause-882959 kubelet[3510]: I0717 19:43:44.332766    3510 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=decefa53-94e0-4dae-a713-5aa724208ceb path="/var/lib/kubelet/pods/decefa53-94e0-4dae-a713-5aa724208ceb/volumes"
	Jul 17 19:43:45 pause-882959 kubelet[3510]: I0717 19:43:45.591405    3510 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	Jul 17 19:43:47 pause-882959 kubelet[3510]: I0717 19:43:47.593917    3510 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-882959 -n pause-882959
helpers_test.go:261: (dbg) Run:  kubectl --context pause-882959 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (69.19s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (140.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-149000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p old-k8s-version-149000 --alsologtostderr -v=3: exit status 82 (2m1.80198629s)

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-149000"  ...
	* Stopping node "old-k8s-version-149000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 19:50:29.047369 1099851 out.go:296] Setting OutFile to fd 1 ...
	I0717 19:50:29.047642 1099851 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:50:29.047658 1099851 out.go:309] Setting ErrFile to fd 2...
	I0717 19:50:29.047664 1099851 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:50:29.047887 1099851 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1061725/.minikube/bin
	I0717 19:50:29.048165 1099851 out.go:303] Setting JSON to false
	I0717 19:50:29.048270 1099851 mustload.go:65] Loading cluster: old-k8s-version-149000
	I0717 19:50:29.048595 1099851 config.go:182] Loaded profile config "old-k8s-version-149000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0717 19:50:29.048690 1099851 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/config.json ...
	I0717 19:50:29.048862 1099851 mustload.go:65] Loading cluster: old-k8s-version-149000
	I0717 19:50:29.048971 1099851 config.go:182] Loaded profile config "old-k8s-version-149000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0717 19:50:29.049003 1099851 stop.go:39] StopHost: old-k8s-version-149000
	I0717 19:50:29.049401 1099851 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:50:29.049478 1099851 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:50:29.065387 1099851 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35659
	I0717 19:50:29.066023 1099851 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:50:29.066882 1099851 main.go:141] libmachine: Using API Version  1
	I0717 19:50:29.066915 1099851 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:50:29.067366 1099851 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:50:29.070970 1099851 out.go:177] * Stopping node "old-k8s-version-149000"  ...
	I0717 19:50:29.073119 1099851 main.go:141] libmachine: Stopping "old-k8s-version-149000"...
	I0717 19:50:29.073154 1099851 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetState
	I0717 19:50:29.075614 1099851 main.go:141] libmachine: (old-k8s-version-149000) Calling .Stop
	I0717 19:50:29.080064 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 0/60
	I0717 19:50:30.081714 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 1/60
	I0717 19:50:31.083401 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 2/60
	I0717 19:50:32.084758 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 3/60
	I0717 19:50:33.086730 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 4/60
	I0717 19:50:34.089266 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 5/60
	I0717 19:50:35.173272 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 6/60
	I0717 19:50:36.175654 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 7/60
	I0717 19:50:37.177102 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 8/60
	I0717 19:50:38.178703 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 9/60
	I0717 19:50:39.180891 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 10/60
	I0717 19:50:40.182580 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 11/60
	I0717 19:50:41.184560 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 12/60
	I0717 19:50:42.186219 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 13/60
	I0717 19:50:43.188592 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 14/60
	I0717 19:50:44.190903 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 15/60
	I0717 19:50:45.192741 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 16/60
	I0717 19:50:46.195249 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 17/60
	I0717 19:50:47.197655 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 18/60
	I0717 19:50:48.199209 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 19/60
	I0717 19:50:49.200929 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 20/60
	I0717 19:50:50.202612 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 21/60
	I0717 19:50:51.205132 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 22/60
	I0717 19:50:52.207648 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 23/60
	I0717 19:50:53.209626 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 24/60
	I0717 19:50:54.212143 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 25/60
	I0717 19:50:55.214006 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 26/60
	I0717 19:50:56.216709 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 27/60
	I0717 19:50:57.218299 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 28/60
	I0717 19:50:58.220419 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 29/60
	I0717 19:50:59.223161 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 30/60
	I0717 19:51:00.225174 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 31/60
	I0717 19:51:01.226927 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 32/60
	I0717 19:51:02.228553 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 33/60
	I0717 19:51:03.230352 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 34/60
	I0717 19:51:04.232730 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 35/60
	I0717 19:51:05.235389 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 36/60
	I0717 19:51:06.237167 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 37/60
	I0717 19:51:07.238821 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 38/60
	I0717 19:51:08.240682 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 39/60
	I0717 19:51:09.243377 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 40/60
	I0717 19:51:10.244845 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 41/60
	I0717 19:51:11.246846 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 42/60
	I0717 19:51:12.248379 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 43/60
	I0717 19:51:13.250741 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 44/60
	I0717 19:51:14.252795 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 45/60
	I0717 19:51:15.255264 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 46/60
	I0717 19:51:16.256989 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 47/60
	I0717 19:51:17.258787 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 48/60
	I0717 19:51:18.260490 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 49/60
	I0717 19:51:19.262967 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 50/60
	I0717 19:51:20.264534 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 51/60
	I0717 19:51:21.266927 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 52/60
	I0717 19:51:22.268929 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 53/60
	I0717 19:51:23.270428 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 54/60
	I0717 19:51:24.272666 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 55/60
	I0717 19:51:25.274258 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 56/60
	I0717 19:51:26.276020 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 57/60
	I0717 19:51:27.278079 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 58/60
	I0717 19:51:28.280233 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 59/60
	I0717 19:51:29.280848 1099851 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0717 19:51:29.280939 1099851 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 19:51:29.280968 1099851 retry.go:31] will retry after 1.365670723s: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 19:51:30.647651 1099851 stop.go:39] StopHost: old-k8s-version-149000
	I0717 19:51:30.648142 1099851 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:51:30.648205 1099851 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:51:30.663044 1099851 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35721
	I0717 19:51:30.663567 1099851 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:51:30.664212 1099851 main.go:141] libmachine: Using API Version  1
	I0717 19:51:30.664239 1099851 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:51:30.664640 1099851 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:51:30.667564 1099851 out.go:177] * Stopping node "old-k8s-version-149000"  ...
	I0717 19:51:30.669465 1099851 main.go:141] libmachine: Stopping "old-k8s-version-149000"...
	I0717 19:51:30.669486 1099851 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetState
	I0717 19:51:30.671348 1099851 main.go:141] libmachine: (old-k8s-version-149000) Calling .Stop
	I0717 19:51:30.675395 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 0/60
	I0717 19:51:31.677211 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 1/60
	I0717 19:51:32.678909 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 2/60
	I0717 19:51:33.680541 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 3/60
	I0717 19:51:34.682032 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 4/60
	I0717 19:51:35.684087 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 5/60
	I0717 19:51:36.686119 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 6/60
	I0717 19:51:37.687676 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 7/60
	I0717 19:51:38.689612 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 8/60
	I0717 19:51:39.691121 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 9/60
	I0717 19:51:40.693434 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 10/60
	I0717 19:51:41.694994 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 11/60
	I0717 19:51:42.696872 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 12/60
	I0717 19:51:43.698420 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 13/60
	I0717 19:51:44.700182 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 14/60
	I0717 19:51:45.702284 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 15/60
	I0717 19:51:46.703865 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 16/60
	I0717 19:51:47.706049 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 17/60
	I0717 19:51:48.708316 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 18/60
	I0717 19:51:49.709834 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 19/60
	I0717 19:51:50.712206 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 20/60
	I0717 19:51:51.713846 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 21/60
	I0717 19:51:52.716538 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 22/60
	I0717 19:51:53.718105 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 23/60
	I0717 19:51:54.720307 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 24/60
	I0717 19:51:55.722313 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 25/60
	I0717 19:51:56.724363 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 26/60
	I0717 19:51:57.726022 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 27/60
	I0717 19:51:58.727673 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 28/60
	I0717 19:51:59.729389 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 29/60
	I0717 19:52:00.731717 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 30/60
	I0717 19:52:01.733516 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 31/60
	I0717 19:52:02.735222 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 32/60
	I0717 19:52:03.737128 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 33/60
	I0717 19:52:04.739008 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 34/60
	I0717 19:52:05.742236 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 35/60
	I0717 19:52:06.743974 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 36/60
	I0717 19:52:07.745666 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 37/60
	I0717 19:52:08.747201 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 38/60
	I0717 19:52:09.748518 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 39/60
	I0717 19:52:10.750662 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 40/60
	I0717 19:52:11.752996 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 41/60
	I0717 19:52:12.754433 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 42/60
	I0717 19:52:13.756498 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 43/60
	I0717 19:52:14.758653 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 44/60
	I0717 19:52:15.760517 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 45/60
	I0717 19:52:16.762574 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 46/60
	I0717 19:52:17.764011 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 47/60
	I0717 19:52:18.765677 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 48/60
	I0717 19:52:19.767091 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 49/60
	I0717 19:52:20.769266 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 50/60
	I0717 19:52:21.770663 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 51/60
	I0717 19:52:22.772176 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 52/60
	I0717 19:52:23.773838 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 53/60
	I0717 19:52:24.775934 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 54/60
	I0717 19:52:25.777960 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 55/60
	I0717 19:52:26.779516 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 56/60
	I0717 19:52:27.781199 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 57/60
	I0717 19:52:28.782628 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 58/60
	I0717 19:52:29.784300 1099851 main.go:141] libmachine: (old-k8s-version-149000) Waiting for machine to stop 59/60
	I0717 19:52:30.785949 1099851 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0717 19:52:30.786003 1099851 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 19:52:30.788707 1099851 out.go:177] 
	W0717 19:52:30.790745 1099851 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0717 19:52:30.790773 1099851 out.go:239] * 
	* 
	W0717 19:52:30.794746 1099851 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 19:52:30.796906 1099851 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p old-k8s-version-149000 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-149000 -n old-k8s-version-149000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-149000 -n old-k8s-version-149000: exit status 3 (18.549749626s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 19:52:49.349979 1101013 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.177:22: connect: no route to host
	E0717 19:52:49.350004 1101013 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.177:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-149000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (140.35s)
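
The stderr trace above follows a fixed-budget stop: the driver's Stop call is issued, the VM state is polled once per second for 60 attempts ("Waiting for machine to stop i/60"), a single retry follows a ~1.4s backoff, and the command exits with GUEST_STOP_TIMEOUT once the machine never leaves the "Running" state. The sketch below reproduces only that control flow as seen in the log; the names (stopVM, currentState, waitForStop) and the stubbed driver calls are illustrative assumptions, not minikube's actual implementation.

```go
// Minimal sketch of the stop-wait pattern visible in the stderr above.
// All identifiers here are hypothetical; the driver calls are stubbed so the
// timeout path from the log can be reproduced locally.
package main

import (
	"fmt"
	"log"
	"time"
)

type vmState string

const stateRunning vmState = "Running"

// stopVM stands in for the driver's Stop call.
func stopVM(name string) error {
	fmt.Printf("* Stopping node %q ...\n", name)
	return nil
}

// currentState stands in for the driver's state query; it is stubbed to keep
// returning "Running", which is what produces the timeout in the log.
func currentState(name string) vmState { return stateRunning }

// waitForStop mirrors the "Waiting for machine to stop i/60" loop.
func waitForStop(name string, attempts int) error {
	for i := 0; i < attempts; i++ {
		log.Printf("(%s) Waiting for machine to stop %d/%d", name, i, attempts)
		if currentState(name) != stateRunning {
			return nil
		}
		time.Sleep(1 * time.Second)
	}
	return fmt.Errorf("unable to stop vm, current state %q", currentState(name))
}

func main() {
	const name = "old-k8s-version-149000"
	var lastErr error
	// One retry after a short backoff, matching the two "Stopping node" lines.
	for try := 0; try < 2; try++ {
		if err := stopVM(name); err != nil {
			lastErr = err
			continue
		}
		if lastErr = waitForStop(name, 60); lastErr == nil {
			return
		}
		log.Printf("stop host returned error: Temporary Error: stop: %v", lastErr)
		time.Sleep(1400 * time.Millisecond)
	}
	log.Fatalf("X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: %v", lastErr)
}
```

With the stubbed state query the sketch burns through both 60-attempt budgets and exits on the timeout branch, which is the same shape as the exit status 82 failure recorded for this test.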

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (140.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-408472 --alsologtostderr -v=3
E0717 19:51:00.133471 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt: no such file or directory
E0717 19:51:03.520660 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-408472 --alsologtostderr -v=3: exit status 82 (2m1.580305295s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-408472"  ...
	* Stopping node "no-preload-408472"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 19:50:56.154064 1100303 out.go:296] Setting OutFile to fd 1 ...
	I0717 19:50:56.154253 1100303 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:50:56.154267 1100303 out.go:309] Setting ErrFile to fd 2...
	I0717 19:50:56.154272 1100303 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:50:56.154486 1100303 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1061725/.minikube/bin
	I0717 19:50:56.154773 1100303 out.go:303] Setting JSON to false
	I0717 19:50:56.154860 1100303 mustload.go:65] Loading cluster: no-preload-408472
	I0717 19:50:56.155199 1100303 config.go:182] Loaded profile config "no-preload-408472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:50:56.155285 1100303 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/config.json ...
	I0717 19:50:56.155441 1100303 mustload.go:65] Loading cluster: no-preload-408472
	I0717 19:50:56.155551 1100303 config.go:182] Loaded profile config "no-preload-408472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:50:56.155575 1100303 stop.go:39] StopHost: no-preload-408472
	I0717 19:50:56.155996 1100303 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:50:56.156065 1100303 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:50:56.172724 1100303 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38553
	I0717 19:50:56.173254 1100303 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:50:56.174087 1100303 main.go:141] libmachine: Using API Version  1
	I0717 19:50:56.174129 1100303 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:50:56.174474 1100303 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:50:56.178182 1100303 out.go:177] * Stopping node "no-preload-408472"  ...
	I0717 19:50:56.180646 1100303 main.go:141] libmachine: Stopping "no-preload-408472"...
	I0717 19:50:56.180688 1100303 main.go:141] libmachine: (no-preload-408472) Calling .GetState
	I0717 19:50:56.183189 1100303 main.go:141] libmachine: (no-preload-408472) Calling .Stop
	I0717 19:50:56.188338 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 0/60
	I0717 19:50:57.189882 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 1/60
	I0717 19:50:58.192678 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 2/60
	I0717 19:50:59.194265 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 3/60
	I0717 19:51:00.196588 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 4/60
	I0717 19:51:01.199067 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 5/60
	I0717 19:51:02.201115 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 6/60
	I0717 19:51:03.203408 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 7/60
	I0717 19:51:04.205039 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 8/60
	I0717 19:51:05.206778 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 9/60
	I0717 19:51:06.209320 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 10/60
	I0717 19:51:07.211750 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 11/60
	I0717 19:51:08.213666 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 12/60
	I0717 19:51:09.215357 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 13/60
	I0717 19:51:10.216886 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 14/60
	I0717 19:51:11.219264 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 15/60
	I0717 19:51:12.220734 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 16/60
	I0717 19:51:13.222773 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 17/60
	I0717 19:51:14.224255 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 18/60
	I0717 19:51:15.226185 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 19/60
	I0717 19:51:16.228380 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 20/60
	I0717 19:51:17.230092 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 21/60
	I0717 19:51:18.231590 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 22/60
	I0717 19:51:19.233345 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 23/60
	I0717 19:51:20.235292 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 24/60
	I0717 19:51:21.237371 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 25/60
	I0717 19:51:22.239419 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 26/60
	I0717 19:51:23.240963 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 27/60
	I0717 19:51:24.242377 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 28/60
	I0717 19:51:25.244235 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 29/60
	I0717 19:51:26.246276 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 30/60
	I0717 19:51:27.248589 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 31/60
	I0717 19:51:28.250362 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 32/60
	I0717 19:51:29.252947 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 33/60
	I0717 19:51:30.254604 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 34/60
	I0717 19:51:31.257104 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 35/60
	I0717 19:51:32.259162 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 36/60
	I0717 19:51:33.261864 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 37/60
	I0717 19:51:34.264223 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 38/60
	I0717 19:51:35.266292 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 39/60
	I0717 19:51:36.268193 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 40/60
	I0717 19:51:37.270088 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 41/60
	I0717 19:51:38.272385 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 42/60
	I0717 19:51:39.274685 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 43/60
	I0717 19:51:40.276901 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 44/60
	I0717 19:51:41.279098 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 45/60
	I0717 19:51:42.281002 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 46/60
	I0717 19:51:43.282849 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 47/60
	I0717 19:51:44.284467 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 48/60
	I0717 19:51:45.286317 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 49/60
	I0717 19:51:46.288120 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 50/60
	I0717 19:51:47.289958 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 51/60
	I0717 19:51:48.291724 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 52/60
	I0717 19:51:49.293208 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 53/60
	I0717 19:51:50.295050 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 54/60
	I0717 19:51:51.297228 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 55/60
	I0717 19:51:52.298537 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 56/60
	I0717 19:51:53.300096 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 57/60
	I0717 19:51:54.301745 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 58/60
	I0717 19:51:55.303251 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 59/60
	I0717 19:51:56.304283 1100303 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0717 19:51:56.304343 1100303 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 19:51:56.304366 1100303 retry.go:31] will retry after 770.094778ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 19:51:57.075299 1100303 stop.go:39] StopHost: no-preload-408472
	I0717 19:51:57.075836 1100303 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:51:57.075899 1100303 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:51:57.092024 1100303 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43103
	I0717 19:51:57.093425 1100303 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:51:57.094196 1100303 main.go:141] libmachine: Using API Version  1
	I0717 19:51:57.094256 1100303 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:51:57.094659 1100303 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:51:57.097897 1100303 out.go:177] * Stopping node "no-preload-408472"  ...
	I0717 19:51:57.099696 1100303 main.go:141] libmachine: Stopping "no-preload-408472"...
	I0717 19:51:57.099731 1100303 main.go:141] libmachine: (no-preload-408472) Calling .GetState
	I0717 19:51:57.101904 1100303 main.go:141] libmachine: (no-preload-408472) Calling .Stop
	I0717 19:51:57.105771 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 0/60
	I0717 19:51:58.108555 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 1/60
	I0717 19:51:59.110550 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 2/60
	I0717 19:52:00.112425 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 3/60
	I0717 19:52:01.114327 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 4/60
	I0717 19:52:02.116432 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 5/60
	I0717 19:52:03.118140 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 6/60
	I0717 19:52:04.120191 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 7/60
	I0717 19:52:05.121803 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 8/60
	I0717 19:52:06.123466 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 9/60
	I0717 19:52:07.125761 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 10/60
	I0717 19:52:08.127458 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 11/60
	I0717 19:52:09.129138 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 12/60
	I0717 19:52:10.130732 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 13/60
	I0717 19:52:11.132550 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 14/60
	I0717 19:52:12.134366 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 15/60
	I0717 19:52:13.136606 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 16/60
	I0717 19:52:14.138283 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 17/60
	I0717 19:52:15.140595 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 18/60
	I0717 19:52:16.142599 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 19/60
	I0717 19:52:17.145209 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 20/60
	I0717 19:52:18.147024 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 21/60
	I0717 19:52:19.149420 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 22/60
	I0717 19:52:20.150980 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 23/60
	I0717 19:52:21.152503 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 24/60
	I0717 19:52:22.154230 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 25/60
	I0717 19:52:23.156045 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 26/60
	I0717 19:52:24.157660 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 27/60
	I0717 19:52:25.159577 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 28/60
	I0717 19:52:26.160806 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 29/60
	I0717 19:52:27.162683 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 30/60
	I0717 19:52:28.164882 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 31/60
	I0717 19:52:29.166647 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 32/60
	I0717 19:52:30.168970 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 33/60
	I0717 19:52:31.170631 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 34/60
	I0717 19:52:32.172264 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 35/60
	I0717 19:52:33.174123 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 36/60
	I0717 19:52:34.176466 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 37/60
	I0717 19:52:35.178192 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 38/60
	I0717 19:52:36.179704 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 39/60
	I0717 19:52:37.181796 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 40/60
	I0717 19:52:38.183738 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 41/60
	I0717 19:52:39.185632 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 42/60
	I0717 19:52:40.187302 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 43/60
	I0717 19:52:41.188802 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 44/60
	I0717 19:52:42.191246 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 45/60
	I0717 19:52:43.192858 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 46/60
	I0717 19:52:44.194695 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 47/60
	I0717 19:52:45.196166 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 48/60
	I0717 19:52:46.197719 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 49/60
	I0717 19:52:47.650152 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 50/60
	I0717 19:52:48.652672 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 51/60
	I0717 19:52:49.654350 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 52/60
	I0717 19:52:50.656446 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 53/60
	I0717 19:52:51.658470 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 54/60
	I0717 19:52:52.659901 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 55/60
	I0717 19:52:53.661612 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 56/60
	I0717 19:52:54.663342 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 57/60
	I0717 19:52:55.665583 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 58/60
	I0717 19:52:56.667022 1100303 main.go:141] libmachine: (no-preload-408472) Waiting for machine to stop 59/60
	I0717 19:52:57.668071 1100303 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0717 19:52:57.668141 1100303 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 19:52:57.671010 1100303 out.go:177] 
	W0717 19:52:57.673107 1100303 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0717 19:52:57.673135 1100303 out.go:239] * 
	* 
	W0717 19:52:57.678775 1100303 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 19:52:57.681265 1100303 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-408472 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-408472 -n no-preload-408472
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-408472 -n no-preload-408472: exit status 3 (18.545947929s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 19:53:16.229905 1101837 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.65:22: connect: no route to host
	E0717 19:53:16.229926 1101837 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.65:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-408472" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (140.13s)
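
The trace above follows a fixed shape: libmachine issues a Stop, polls the VM state once per second for 60 iterations, gives up with `unable to stop vm, current state "Running"`, retries the whole stop once after a sub-second backoff (770ms here), and the command finally exits with GUEST_STOP_TIMEOUT (exit status 82). The sketch below reproduces that poll-and-retry shape purely as an illustration; the interface and function names are hypothetical and are not minikube's actual stop.go or driver API.

package main

import (
	"fmt"
	"time"
)

// stopper is a hypothetical stand-in for whatever exposes Stop/State on the
// real KVM driver; it is not minikube's actual interface.
type stopper interface {
	Stop() error
	State() (string, error)
}

// stopWithPoll asks the VM to stop, then polls its state once per second for
// up to maxPolls iterations, mirroring the "Waiting for machine to stop i/60"
// lines captured in the log above.
func stopWithPoll(m stopper, maxPolls int) error {
	if err := m.Stop(); err != nil {
		return err
	}
	for i := 0; i < maxPolls; i++ {
		state, err := m.State()
		if err != nil {
			return err
		}
		if state == "Stopped" {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("unable to stop vm, current state %q", "Running")
}

// stopWithRetry retries the poll loop once after a short backoff, matching the
// single retry visible before the GUEST_STOP_TIMEOUT exit.
func stopWithRetry(m stopper) error {
	if err := stopWithPoll(m, 60); err == nil {
		return nil
	}
	time.Sleep(770 * time.Millisecond) // backoff in the same ballpark as the log
	return stopWithPoll(m, 60)
}

func main() {
	fmt.Println("illustrative sketch only; see the captured log for the real behavior")
}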

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.56s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-711413 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-711413 --alsologtostderr -v=3: exit status 82 (2m0.958752131s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-711413"  ...
	* Stopping node "default-k8s-diff-port-711413"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 19:51:32.038947 1100548 out.go:296] Setting OutFile to fd 1 ...
	I0717 19:51:32.039112 1100548 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:51:32.039125 1100548 out.go:309] Setting ErrFile to fd 2...
	I0717 19:51:32.039131 1100548 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:51:32.039379 1100548 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1061725/.minikube/bin
	I0717 19:51:32.039673 1100548 out.go:303] Setting JSON to false
	I0717 19:51:32.039757 1100548 mustload.go:65] Loading cluster: default-k8s-diff-port-711413
	I0717 19:51:32.040143 1100548 config.go:182] Loaded profile config "default-k8s-diff-port-711413": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:51:32.040235 1100548 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/config.json ...
	I0717 19:51:32.040391 1100548 mustload.go:65] Loading cluster: default-k8s-diff-port-711413
	I0717 19:51:32.040515 1100548 config.go:182] Loaded profile config "default-k8s-diff-port-711413": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:51:32.040558 1100548 stop.go:39] StopHost: default-k8s-diff-port-711413
	I0717 19:51:32.040932 1100548 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:51:32.040997 1100548 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:51:32.056358 1100548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45627
	I0717 19:51:32.056969 1100548 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:51:32.059662 1100548 main.go:141] libmachine: Using API Version  1
	I0717 19:51:32.059713 1100548 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:51:32.060176 1100548 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:51:32.062751 1100548 out.go:177] * Stopping node "default-k8s-diff-port-711413"  ...
	I0717 19:51:32.064508 1100548 main.go:141] libmachine: Stopping "default-k8s-diff-port-711413"...
	I0717 19:51:32.064538 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetState
	I0717 19:51:32.066397 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .Stop
	I0717 19:51:32.070665 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 0/60
	I0717 19:51:33.072210 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 1/60
	I0717 19:51:34.073806 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 2/60
	I0717 19:51:35.075520 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 3/60
	I0717 19:51:36.077329 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 4/60
	I0717 19:51:37.079995 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 5/60
	I0717 19:51:38.081449 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 6/60
	I0717 19:51:39.083316 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 7/60
	I0717 19:51:40.085238 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 8/60
	I0717 19:51:41.086825 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 9/60
	I0717 19:51:42.088736 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 10/60
	I0717 19:51:43.090385 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 11/60
	I0717 19:51:44.092247 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 12/60
	I0717 19:51:45.094191 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 13/60
	I0717 19:51:46.095893 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 14/60
	I0717 19:51:47.098374 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 15/60
	I0717 19:51:48.100011 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 16/60
	I0717 19:51:49.101946 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 17/60
	I0717 19:51:50.103788 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 18/60
	I0717 19:51:51.105446 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 19/60
	I0717 19:51:52.107686 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 20/60
	I0717 19:51:53.109771 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 21/60
	I0717 19:51:54.112386 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 22/60
	I0717 19:51:55.114079 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 23/60
	I0717 19:51:56.115747 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 24/60
	I0717 19:51:57.118042 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 25/60
	I0717 19:51:58.119426 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 26/60
	I0717 19:51:59.120963 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 27/60
	I0717 19:52:00.123510 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 28/60
	I0717 19:52:01.124838 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 29/60
	I0717 19:52:02.126449 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 30/60
	I0717 19:52:03.128386 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 31/60
	I0717 19:52:04.130097 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 32/60
	I0717 19:52:05.132412 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 33/60
	I0717 19:52:06.134094 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 34/60
	I0717 19:52:07.136224 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 35/60
	I0717 19:52:08.137678 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 36/60
	I0717 19:52:09.139153 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 37/60
	I0717 19:52:10.140427 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 38/60
	I0717 19:52:11.141964 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 39/60
	I0717 19:52:12.143266 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 40/60
	I0717 19:52:13.144749 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 41/60
	I0717 19:52:14.147156 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 42/60
	I0717 19:52:15.148692 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 43/60
	I0717 19:52:16.150340 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 44/60
	I0717 19:52:17.152696 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 45/60
	I0717 19:52:18.154427 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 46/60
	I0717 19:52:19.156777 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 47/60
	I0717 19:52:20.158384 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 48/60
	I0717 19:52:21.160325 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 49/60
	I0717 19:52:22.161791 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 50/60
	I0717 19:52:23.164014 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 51/60
	I0717 19:52:24.165589 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 52/60
	I0717 19:52:25.166809 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 53/60
	I0717 19:52:26.168259 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 54/60
	I0717 19:52:27.170178 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 55/60
	I0717 19:52:28.172258 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 56/60
	I0717 19:52:29.173696 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 57/60
	I0717 19:52:30.176084 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 58/60
	I0717 19:52:31.177436 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 59/60
	I0717 19:52:32.178844 1100548 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0717 19:52:32.178900 1100548 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 19:52:32.178924 1100548 retry.go:31] will retry after 621.618112ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 19:52:32.800671 1100548 stop.go:39] StopHost: default-k8s-diff-port-711413
	I0717 19:52:32.801082 1100548 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:52:32.801141 1100548 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:52:32.817355 1100548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38473
	I0717 19:52:32.817905 1100548 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:52:32.818542 1100548 main.go:141] libmachine: Using API Version  1
	I0717 19:52:32.818580 1100548 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:52:32.818991 1100548 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:52:32.821766 1100548 out.go:177] * Stopping node "default-k8s-diff-port-711413"  ...
	I0717 19:52:32.824039 1100548 main.go:141] libmachine: Stopping "default-k8s-diff-port-711413"...
	I0717 19:52:32.824070 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetState
	I0717 19:52:32.826619 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .Stop
	I0717 19:52:32.830867 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 0/60
	I0717 19:52:33.832590 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 1/60
	I0717 19:52:34.834460 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 2/60
	I0717 19:52:35.836173 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 3/60
	I0717 19:52:36.838048 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 4/60
	I0717 19:52:37.839998 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 5/60
	I0717 19:52:38.841459 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 6/60
	I0717 19:52:39.843090 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 7/60
	I0717 19:52:40.845086 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 8/60
	I0717 19:52:41.846863 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 9/60
	I0717 19:52:42.849486 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 10/60
	I0717 19:52:43.851438 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 11/60
	I0717 19:52:44.853113 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 12/60
	I0717 19:52:45.855103 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 13/60
	I0717 19:52:46.856666 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 14/60
	I0717 19:52:47.859389 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 15/60
	I0717 19:52:48.861266 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 16/60
	I0717 19:52:49.863596 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 17/60
	I0717 19:52:50.865208 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 18/60
	I0717 19:52:51.866898 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 19/60
	I0717 19:52:52.868956 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 20/60
	I0717 19:52:53.870730 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 21/60
	I0717 19:52:54.872844 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 22/60
	I0717 19:52:55.874177 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 23/60
	I0717 19:52:56.875841 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 24/60
	I0717 19:52:57.877923 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 25/60
	I0717 19:52:58.879853 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 26/60
	I0717 19:52:59.881683 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 27/60
	I0717 19:53:00.883573 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 28/60
	I0717 19:53:01.885353 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 29/60
	I0717 19:53:02.887159 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 30/60
	I0717 19:53:03.888852 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 31/60
	I0717 19:53:04.890593 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 32/60
	I0717 19:53:05.892272 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 33/60
	I0717 19:53:06.893653 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 34/60
	I0717 19:53:07.894965 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 35/60
	I0717 19:53:08.896537 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 36/60
	I0717 19:53:09.898244 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 37/60
	I0717 19:53:10.900111 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 38/60
	I0717 19:53:11.901819 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 39/60
	I0717 19:53:12.904242 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 40/60
	I0717 19:53:13.905946 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 41/60
	I0717 19:53:14.908318 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 42/60
	I0717 19:53:15.910155 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 43/60
	I0717 19:53:16.911783 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 44/60
	I0717 19:53:17.914093 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 45/60
	I0717 19:53:18.916464 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 46/60
	I0717 19:53:19.918209 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 47/60
	I0717 19:53:20.920503 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 48/60
	I0717 19:53:21.922376 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 49/60
	I0717 19:53:22.924310 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 50/60
	I0717 19:53:23.926079 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 51/60
	I0717 19:53:24.928651 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 52/60
	I0717 19:53:25.930446 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 53/60
	I0717 19:53:26.932397 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 54/60
	I0717 19:53:27.934033 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 55/60
	I0717 19:53:28.935706 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 56/60
	I0717 19:53:29.937240 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 57/60
	I0717 19:53:30.938921 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 58/60
	I0717 19:53:31.941111 1100548 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for machine to stop 59/60
	I0717 19:53:32.941836 1100548 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0717 19:53:32.941893 1100548 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 19:53:32.944510 1100548 out.go:177] 
	W0717 19:53:32.946533 1100548 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0717 19:53:32.946555 1100548 out.go:239] * 
	* 
	W0717 19:53:32.950390 1100548 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 19:53:32.952675 1100548 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-711413 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-711413 -n default-k8s-diff-port-711413
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-711413 -n default-k8s-diff-port-711413: exit status 3 (18.602750615s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 19:53:51.557952 1102198 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.51:22: connect: no route to host
	E0717 19:53:51.557980 1102198 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.51:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-711413" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.56s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-149000 -n old-k8s-version-149000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-149000 -n old-k8s-version-149000: exit status 3 (3.200121107s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 19:52:52.550022 1101724 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.177:22: connect: no route to host
	E0717 19:52:52.550044 1101724 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.177:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-149000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-149000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.158739988s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.177:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-149000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-149000 -n old-k8s-version-149000
E0717 19:53:01.330662 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/functional-685960/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-149000 -n old-k8s-version-149000: exit status 3 (3.056755224s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 19:53:01.766048 1101865 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.177:22: connect: no route to host
	E0717 19:53:01.766075 1101865 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.177:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-149000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.42s)
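
What the EnableAddonAfterStop subtests assert after the failed stop is visible in the commands above: `minikube status --format={{.Host}}` must report "Stopped", but because the VM is still running yet unreachable over SSH ("no route to host"), the host state comes back as "Error" and the subsequent `addons enable dashboard` exits with MK_ADDON_ENABLE_PAUSED. Below is a minimal standalone re-run of that first assertion; the binary path, flags, and profile name are taken verbatim from the log, everything else is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation as the test: report only the host state for the profile.
	out, err := exec.Command("out/minikube-linux-amd64",
		"status", "--format={{.Host}}",
		"-p", "old-k8s-version-149000", "-n", "old-k8s-version-149000").CombinedOutput()
	state := strings.TrimSpace(string(out))
	if err != nil {
		// status exits non-zero (3 in the log above) when the host is not healthy.
		fmt.Printf("status exited with error: %v (host state %q)\n", err, state)
	}
	if state != "Stopped" {
		fmt.Printf("expected post-stop host status %q, got %q\n", "Stopped", state)
	}
}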

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-408472 -n no-preload-408472
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-408472 -n no-preload-408472: exit status 3 (3.169111722s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 19:53:19.398041 1102026 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.65:22: connect: no route to host
	E0717 19:53:19.398065 1102026 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.65:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-408472 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-408472 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.156789669s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.65:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-408472 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-408472 -n no-preload-408472
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-408472 -n no-preload-408472: exit status 3 (3.058864827s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 19:53:28.613955 1102106 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.65:22: connect: no route to host
	E0717 19:53:28.613990 1102106 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.65:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-408472" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.39s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-711413 -n default-k8s-diff-port-711413
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-711413 -n default-k8s-diff-port-711413: exit status 3 (3.167314202s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 19:53:54.725982 1102301 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.51:22: connect: no route to host
	E0717 19:53:54.726002 1102301 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.51:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-711413 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-711413 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.157639501s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.51:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-711413 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-711413 -n default-k8s-diff-port-711413
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-711413 -n default-k8s-diff-port-711413: exit status 3 (3.058117997s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 19:54:03.941984 1102386 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.51:22: connect: no route to host
	E0717 19:54:03.942008 1102386 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.51:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-711413" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (140.37s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-114855 --alsologtostderr -v=3
E0717 19:55:46.570303 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt: no such file or directory
E0717 19:56:00.134123 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt: no such file or directory
E0717 19:56:03.520090 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-114855 --alsologtostderr -v=3: exit status 82 (2m1.82506284s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-114855"  ...
	* Stopping node "embed-certs-114855"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 19:54:42.972801 1102642 out.go:296] Setting OutFile to fd 1 ...
	I0717 19:54:42.972972 1102642 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:54:42.972986 1102642 out.go:309] Setting ErrFile to fd 2...
	I0717 19:54:42.972993 1102642 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:54:42.973764 1102642 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1061725/.minikube/bin
	I0717 19:54:42.974376 1102642 out.go:303] Setting JSON to false
	I0717 19:54:42.974580 1102642 mustload.go:65] Loading cluster: embed-certs-114855
	I0717 19:54:42.974983 1102642 config.go:182] Loaded profile config "embed-certs-114855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:54:42.975081 1102642 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855/config.json ...
	I0717 19:54:42.975275 1102642 mustload.go:65] Loading cluster: embed-certs-114855
	I0717 19:54:42.975405 1102642 config.go:182] Loaded profile config "embed-certs-114855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:54:42.975443 1102642 stop.go:39] StopHost: embed-certs-114855
	I0717 19:54:42.975837 1102642 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:54:42.975900 1102642 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:54:42.991023 1102642 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36721
	I0717 19:54:42.991584 1102642 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:54:42.992437 1102642 main.go:141] libmachine: Using API Version  1
	I0717 19:54:42.992481 1102642 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:54:42.992900 1102642 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:54:42.995218 1102642 out.go:177] * Stopping node "embed-certs-114855"  ...
	I0717 19:54:42.997594 1102642 main.go:141] libmachine: Stopping "embed-certs-114855"...
	I0717 19:54:42.997628 1102642 main.go:141] libmachine: (embed-certs-114855) Calling .GetState
	I0717 19:54:42.999759 1102642 main.go:141] libmachine: (embed-certs-114855) Calling .Stop
	I0717 19:54:43.003856 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 0/60
	I0717 19:54:44.005708 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 1/60
	I0717 19:54:45.007321 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 2/60
	I0717 19:54:46.008728 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 3/60
	I0717 19:54:47.010195 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 4/60
	I0717 19:54:48.012858 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 5/60
	I0717 19:54:49.014482 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 6/60
	I0717 19:54:50.016192 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 7/60
	I0717 19:54:51.017843 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 8/60
	I0717 19:54:52.020530 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 9/60
	I0717 19:54:53.022633 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 10/60
	I0717 19:54:54.024205 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 11/60
	I0717 19:54:55.025893 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 12/60
	I0717 19:54:56.027588 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 13/60
	I0717 19:54:57.029304 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 14/60
	I0717 19:54:58.031968 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 15/60
	I0717 19:54:59.033692 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 16/60
	I0717 19:55:00.035601 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 17/60
	I0717 19:55:01.037637 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 18/60
	I0717 19:55:02.039500 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 19/60
	I0717 19:55:03.041545 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 20/60
	I0717 19:55:04.043456 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 21/60
	I0717 19:55:05.045450 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 22/60
	I0717 19:55:06.047371 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 23/60
	I0717 19:55:07.049330 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 24/60
	I0717 19:55:08.051624 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 25/60
	I0717 19:55:09.053450 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 26/60
	I0717 19:55:10.055153 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 27/60
	I0717 19:55:11.056881 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 28/60
	I0717 19:55:12.058789 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 29/60
	I0717 19:55:13.060781 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 30/60
	I0717 19:55:14.063284 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 31/60
	I0717 19:55:15.065119 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 32/60
	I0717 19:55:16.066595 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 33/60
	I0717 19:55:17.068469 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 34/60
	I0717 19:55:18.071133 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 35/60
	I0717 19:55:19.072642 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 36/60
	I0717 19:55:20.074343 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 37/60
	I0717 19:55:21.075944 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 38/60
	I0717 19:55:22.077685 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 39/60
	I0717 19:55:23.079901 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 40/60
	I0717 19:55:24.081494 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 41/60
	I0717 19:55:25.083511 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 42/60
	I0717 19:55:26.085118 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 43/60
	I0717 19:55:27.087120 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 44/60
	I0717 19:55:28.089903 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 45/60
	I0717 19:55:29.091653 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 46/60
	I0717 19:55:30.093314 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 47/60
	I0717 19:55:31.095006 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 48/60
	I0717 19:55:32.096666 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 49/60
	I0717 19:55:33.098236 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 50/60
	I0717 19:55:34.100146 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 51/60
	I0717 19:55:35.101789 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 52/60
	I0717 19:55:36.103271 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 53/60
	I0717 19:55:37.105161 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 54/60
	I0717 19:55:38.107695 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 55/60
	I0717 19:55:39.109281 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 56/60
	I0717 19:55:40.110821 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 57/60
	I0717 19:55:41.112492 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 58/60
	I0717 19:55:42.114241 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 59/60
	I0717 19:55:43.115041 1102642 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0717 19:55:43.115137 1102642 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 19:55:43.115166 1102642 retry.go:31] will retry after 1.486294136s: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 19:55:44.602856 1102642 stop.go:39] StopHost: embed-certs-114855
	I0717 19:55:44.603307 1102642 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:55:44.603355 1102642 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:55:44.619566 1102642 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40847
	I0717 19:55:44.620177 1102642 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:55:44.620804 1102642 main.go:141] libmachine: Using API Version  1
	I0717 19:55:44.620839 1102642 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:55:44.621195 1102642 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:55:44.624184 1102642 out.go:177] * Stopping node "embed-certs-114855"  ...
	I0717 19:55:44.626126 1102642 main.go:141] libmachine: Stopping "embed-certs-114855"...
	I0717 19:55:44.626150 1102642 main.go:141] libmachine: (embed-certs-114855) Calling .GetState
	I0717 19:55:44.628089 1102642 main.go:141] libmachine: (embed-certs-114855) Calling .Stop
	I0717 19:55:44.631474 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 0/60
	I0717 19:55:45.633341 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 1/60
	I0717 19:55:46.635269 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 2/60
	I0717 19:55:47.636936 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 3/60
	I0717 19:55:48.638548 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 4/60
	I0717 19:55:49.641039 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 5/60
	I0717 19:55:50.642866 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 6/60
	I0717 19:55:51.644360 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 7/60
	I0717 19:55:52.645973 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 8/60
	I0717 19:55:53.647518 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 9/60
	I0717 19:55:54.649893 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 10/60
	I0717 19:55:55.651850 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 11/60
	I0717 19:55:56.653481 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 12/60
	I0717 19:55:57.655174 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 13/60
	I0717 19:55:58.656795 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 14/60
	I0717 19:55:59.659143 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 15/60
	I0717 19:56:00.661299 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 16/60
	I0717 19:56:01.662955 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 17/60
	I0717 19:56:02.664942 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 18/60
	I0717 19:56:03.666518 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 19/60
	I0717 19:56:04.669024 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 20/60
	I0717 19:56:05.670890 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 21/60
	I0717 19:56:06.672547 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 22/60
	I0717 19:56:07.674298 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 23/60
	I0717 19:56:08.676115 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 24/60
	I0717 19:56:09.678175 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 25/60
	I0717 19:56:10.679885 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 26/60
	I0717 19:56:11.681674 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 27/60
	I0717 19:56:12.683288 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 28/60
	I0717 19:56:13.684957 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 29/60
	I0717 19:56:14.687109 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 30/60
	I0717 19:56:15.689047 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 31/60
	I0717 19:56:16.690869 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 32/60
	I0717 19:56:17.692538 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 33/60
	I0717 19:56:18.694111 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 34/60
	I0717 19:56:19.696437 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 35/60
	I0717 19:56:20.698251 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 36/60
	I0717 19:56:21.700056 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 37/60
	I0717 19:56:22.701616 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 38/60
	I0717 19:56:23.703224 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 39/60
	I0717 19:56:24.705299 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 40/60
	I0717 19:56:25.707253 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 41/60
	I0717 19:56:26.708881 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 42/60
	I0717 19:56:27.710856 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 43/60
	I0717 19:56:28.712393 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 44/60
	I0717 19:56:29.715047 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 45/60
	I0717 19:56:30.716821 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 46/60
	I0717 19:56:31.718469 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 47/60
	I0717 19:56:32.720337 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 48/60
	I0717 19:56:33.721981 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 49/60
	I0717 19:56:34.724358 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 50/60
	I0717 19:56:35.726218 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 51/60
	I0717 19:56:36.727729 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 52/60
	I0717 19:56:37.729442 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 53/60
	I0717 19:56:38.731156 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 54/60
	I0717 19:56:39.733465 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 55/60
	I0717 19:56:40.734891 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 56/60
	I0717 19:56:41.736303 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 57/60
	I0717 19:56:42.737956 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 58/60
	I0717 19:56:43.739524 1102642 main.go:141] libmachine: (embed-certs-114855) Waiting for machine to stop 59/60
	I0717 19:56:44.740882 1102642 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0717 19:56:44.740939 1102642 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 19:56:44.743503 1102642 out.go:177] 
	W0717 19:56:44.745487 1102642 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0717 19:56:44.745502 1102642 out.go:239] * 
	* 
	W0717 19:56:44.749410 1102642 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 19:56:44.751566 1102642 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-114855 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-114855 -n embed-certs-114855
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-114855 -n embed-certs-114855: exit status 3 (18.547991882s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 19:57:03.302003 1102958 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.213:22: connect: no route to host
	E0717 19:57:03.302028 1102958 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.213:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-114855" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (140.37s)
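Note: the log above pairs exit status 82 with GUEST_STOP_TIMEOUT — both stop passes cycle through 60 one-second waits without the kvm2 machine ever leaving the "Running" state. A minimal, hypothetical cross-check of the libvirt domain from the Jenkins host would look like the following (assumes virsh is installed there and that the domain carries the profile name; neither is shown in this log):

	virsh domstate embed-certs-114855    # reports "running" if the guest ignored the stop request
	virsh shutdown embed-certs-114855    # graceful ACPI shutdown, comparable to what the driver attempts
	virsh destroy embed-certs-114855     # hard power-off; manual cleanup only, not what the test does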

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-114855 -n embed-certs-114855
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-114855 -n embed-certs-114855: exit status 3 (3.167384466s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 19:57:06.469975 1103032 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.213:22: connect: no route to host
	E0717 19:57:06.470001 1103032 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.213:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-114855 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-114855 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.157995388s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.213:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-114855 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-114855 -n embed-certs-114855
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-114855 -n embed-certs-114855: exit status 3 (3.058045416s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 19:57:15.686106 1103101 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.213:22: connect: no route to host
	E0717 19:57:15.686130 1103101 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.213:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-114855" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)
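Both failures here are follow-on effects of the stop timeout above: the VM at 192.168.39.213 is still unreachable over SSH ("no route to host"), so status reports "Error" instead of "Stopped" and the addon enable aborts with MK_ADDON_ENABLE_PAUSED. A hedged sketch of how the host state could be inspected programmatically before retrying (the jq filter is illustrative and not part of the test suite):

	out/minikube-linux-amd64 status -p embed-certs-114855 --output json | jq -r '.Host'
	# prints "Stopped" after a clean stop; anything else (e.g. "Error") means the addon step will fail the same way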

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.71s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-408472 -n no-preload-408472
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-07-17 20:12:25.446499448 +0000 UTC m=+5346.755194997
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
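For reference, the wait the test performs is roughly equivalent to the kubectl invocation below (sketch only; the context name is assumed to match the profile, and the 540s timeout mirrors the 9m0s in the log):

	kubectl --context no-preload-408472 -n kubernetes-dashboard wait pod \
	  -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=540s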
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-408472 -n no-preload-408472
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-408472 logs -n 25
E0717 20:12:26.571442 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-408472 logs -n 25: (1.840166474s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p no-preload-408472             | no-preload-408472            | jenkins | v1.30.1 | 17 Jul 23 19:50 UTC | 17 Jul 23 19:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-408472                                   | no-preload-408472            | jenkins | v1.30.1 | 17 Jul 23 19:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-711413  | default-k8s-diff-port-711413 | jenkins | v1.30.1 | 17 Jul 23 19:51 UTC | 17 Jul 23 19:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-711413 | jenkins | v1.30.1 | 17 Jul 23 19:51 UTC |                     |
	|         | default-k8s-diff-port-711413                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-891260             | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:51 UTC | 17 Jul 23 19:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-891260                                   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:51 UTC | 17 Jul 23 19:51 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-891260                  | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:51 UTC | 17 Jul 23 19:51 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-891260 --memory=2200 --alsologtostderr   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:51 UTC | 17 Jul 23 19:52 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-891260 sudo                              | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-891260                                   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-891260                                   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-891260                                   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	| delete  | -p newest-cni-891260                                   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	| delete  | -p                                                     | disable-driver-mounts-178387 | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	|         | disable-driver-mounts-178387                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-114855                                  | embed-certs-114855           | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-149000             | old-k8s-version-149000       | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-149000                              | old-k8s-version-149000       | jenkins | v1.30.1 | 17 Jul 23 19:53 UTC | 17 Jul 23 20:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-408472                  | no-preload-408472            | jenkins | v1.30.1 | 17 Jul 23 19:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-408472                                   | no-preload-408472            | jenkins | v1.30.1 | 17 Jul 23 19:53 UTC | 17 Jul 23 20:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-711413       | default-k8s-diff-port-711413 | jenkins | v1.30.1 | 17 Jul 23 19:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-711413 | jenkins | v1.30.1 | 17 Jul 23 19:54 UTC | 17 Jul 23 20:03 UTC |
	|         | default-k8s-diff-port-711413                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-114855            | embed-certs-114855           | jenkins | v1.30.1 | 17 Jul 23 19:54 UTC | 17 Jul 23 19:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-114855                                  | embed-certs-114855           | jenkins | v1.30.1 | 17 Jul 23 19:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-114855                 | embed-certs-114855           | jenkins | v1.30.1 | 17 Jul 23 19:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-114855                                  | embed-certs-114855           | jenkins | v1.30.1 | 17 Jul 23 19:57 UTC | 17 Jul 23 20:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 19:57:15
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 19:57:15.731358 1103141 out.go:296] Setting OutFile to fd 1 ...
	I0717 19:57:15.731568 1103141 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:57:15.731580 1103141 out.go:309] Setting ErrFile to fd 2...
	I0717 19:57:15.731587 1103141 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:57:15.731815 1103141 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1061725/.minikube/bin
	I0717 19:57:15.732432 1103141 out.go:303] Setting JSON to false
	I0717 19:57:15.733539 1103141 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":16787,"bootTime":1689607049,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 19:57:15.733642 1103141 start.go:138] virtualization: kvm guest
	I0717 19:57:15.737317 1103141 out.go:177] * [embed-certs-114855] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 19:57:15.739399 1103141 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 19:57:15.739429 1103141 notify.go:220] Checking for updates...
	I0717 19:57:15.741380 1103141 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:57:15.743518 1103141 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 19:57:15.745436 1103141 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1061725/.minikube
	I0717 19:57:15.747588 1103141 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 19:57:15.749399 1103141 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 19:57:15.751806 1103141 config.go:182] Loaded profile config "embed-certs-114855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:57:15.752284 1103141 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:57:15.752344 1103141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:57:15.767989 1103141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42921
	I0717 19:57:15.768411 1103141 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:57:15.769006 1103141 main.go:141] libmachine: Using API Version  1
	I0717 19:57:15.769098 1103141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:57:15.769495 1103141 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:57:15.769753 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 19:57:15.770054 1103141 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 19:57:15.770369 1103141 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:57:15.770414 1103141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:57:15.785632 1103141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40597
	I0717 19:57:15.786193 1103141 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:57:15.786746 1103141 main.go:141] libmachine: Using API Version  1
	I0717 19:57:15.786780 1103141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:57:15.787144 1103141 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:57:15.787366 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 19:57:15.827764 1103141 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 19:57:15.829847 1103141 start.go:298] selected driver: kvm2
	I0717 19:57:15.829881 1103141 start.go:880] validating driver "kvm2" against &{Name:embed-certs-114855 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-11
4855 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountStrin
g:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:57:15.830064 1103141 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 19:57:15.830818 1103141 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:57:15.830919 1103141 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16890-1061725/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 19:57:15.846540 1103141 install.go:137] /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2 version is 1.30.1
	I0717 19:57:15.846983 1103141 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 19:57:15.847033 1103141 cni.go:84] Creating CNI manager for ""
	I0717 19:57:15.847067 1103141 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:57:15.847081 1103141 start_flags.go:319] config:
	{Name:embed-certs-114855 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-114855 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs
:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:57:15.847306 1103141 iso.go:125] acquiring lock: {Name:mk4c08fdde891ec9d41b144ca36ec57b2da32175 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:57:15.849943 1103141 out.go:177] * Starting control plane node embed-certs-114855 in cluster embed-certs-114855
	I0717 19:57:14.309967 1101908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.177:22: connect: no route to host
	I0717 19:57:15.851794 1103141 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 19:57:15.851858 1103141 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0717 19:57:15.851874 1103141 cache.go:57] Caching tarball of preloaded images
	I0717 19:57:15.851987 1103141 preload.go:174] Found /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 19:57:15.852001 1103141 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 19:57:15.852143 1103141 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855/config.json ...
	I0717 19:57:15.852383 1103141 start.go:365] acquiring machines lock for embed-certs-114855: {Name:mk1fbc5d19c9a63796b02883911e39f1c406ff89 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:57:17.381986 1101908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.177:22: connect: no route to host
	I0717 19:57:23.461901 1101908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.177:22: connect: no route to host
	I0717 19:57:26.533953 1101908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.177:22: connect: no route to host
	I0717 19:57:32.613932 1101908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.177:22: connect: no route to host
	I0717 19:57:35.685977 1101908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.177:22: connect: no route to host
	I0717 19:57:41.765852 1101908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.177:22: connect: no route to host
	I0717 19:57:44.837869 1101908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.177:22: connect: no route to host
	I0717 19:57:50.917965 1101908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.177:22: connect: no route to host
	I0717 19:57:53.921775 1102136 start.go:369] acquired machines lock for "no-preload-408472" in 4m25.126407357s
	I0717 19:57:53.921838 1102136 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:57:53.921845 1102136 fix.go:54] fixHost starting: 
	I0717 19:57:53.922267 1102136 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:57:53.922309 1102136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:57:53.937619 1102136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39509
	I0717 19:57:53.938191 1102136 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:57:53.938815 1102136 main.go:141] libmachine: Using API Version  1
	I0717 19:57:53.938854 1102136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:57:53.939222 1102136 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:57:53.939501 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	I0717 19:57:53.939704 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetState
	I0717 19:57:53.941674 1102136 fix.go:102] recreateIfNeeded on no-preload-408472: state=Stopped err=<nil>
	I0717 19:57:53.941732 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	W0717 19:57:53.941961 1102136 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 19:57:53.944840 1102136 out.go:177] * Restarting existing kvm2 VM for "no-preload-408472" ...
	I0717 19:57:53.919175 1101908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:57:53.919232 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 19:57:53.921597 1101908 machine.go:91] provisioned docker machine in 4m37.562634254s
	I0717 19:57:53.921653 1101908 fix.go:56] fixHost completed within 4m37.5908464s
	I0717 19:57:53.921659 1101908 start.go:83] releasing machines lock for "old-k8s-version-149000", held for 4m37.590895645s
	W0717 19:57:53.921680 1101908 start.go:688] error starting host: provision: host is not running
	W0717 19:57:53.921815 1101908 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0717 19:57:53.921826 1101908 start.go:703] Will try again in 5 seconds ...
	I0717 19:57:53.947202 1102136 main.go:141] libmachine: (no-preload-408472) Calling .Start
	I0717 19:57:53.947561 1102136 main.go:141] libmachine: (no-preload-408472) Ensuring networks are active...
	I0717 19:57:53.948787 1102136 main.go:141] libmachine: (no-preload-408472) Ensuring network default is active
	I0717 19:57:53.949254 1102136 main.go:141] libmachine: (no-preload-408472) Ensuring network mk-no-preload-408472 is active
	I0717 19:57:53.949695 1102136 main.go:141] libmachine: (no-preload-408472) Getting domain xml...
	I0717 19:57:53.950763 1102136 main.go:141] libmachine: (no-preload-408472) Creating domain...
	I0717 19:57:55.256278 1102136 main.go:141] libmachine: (no-preload-408472) Waiting to get IP...
	I0717 19:57:55.257164 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:57:55.257506 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:57:55.257619 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:57:55.257495 1103281 retry.go:31] will retry after 210.861865ms: waiting for machine to come up
	I0717 19:57:55.470210 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:57:55.470771 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:57:55.470798 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:57:55.470699 1103281 retry.go:31] will retry after 348.064579ms: waiting for machine to come up
	I0717 19:57:55.820645 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:57:55.821335 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:57:55.821366 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:57:55.821251 1103281 retry.go:31] will retry after 340.460253ms: waiting for machine to come up
	I0717 19:57:56.163913 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:57:56.164380 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:57:56.164412 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:57:56.164331 1103281 retry.go:31] will retry after 551.813243ms: waiting for machine to come up
	I0717 19:57:56.718505 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:57:56.719004 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:57:56.719034 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:57:56.718953 1103281 retry.go:31] will retry after 640.277548ms: waiting for machine to come up
	I0717 19:57:57.360930 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:57:57.361456 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:57:57.361485 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:57:57.361395 1103281 retry.go:31] will retry after 590.296988ms: waiting for machine to come up
	I0717 19:57:57.953399 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:57:57.953886 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:57:57.953913 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:57:57.953811 1103281 retry.go:31] will retry after 884.386688ms: waiting for machine to come up
	I0717 19:57:58.923546 1101908 start.go:365] acquiring machines lock for old-k8s-version-149000: {Name:mk1fbc5d19c9a63796b02883911e39f1c406ff89 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:57:58.840158 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:57:58.840577 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:57:58.840610 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:57:58.840529 1103281 retry.go:31] will retry after 1.10470212s: waiting for machine to come up
	I0717 19:57:59.947457 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:57:59.947973 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:57:59.948001 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:57:59.947933 1103281 retry.go:31] will retry after 1.338375271s: waiting for machine to come up
	I0717 19:58:01.288616 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:01.289194 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:58:01.289226 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:58:01.289133 1103281 retry.go:31] will retry after 1.633127486s: waiting for machine to come up
	I0717 19:58:02.923621 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:02.924330 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:58:02.924365 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:58:02.924253 1103281 retry.go:31] will retry after 2.365924601s: waiting for machine to come up
	I0717 19:58:05.291979 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:05.292487 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:58:05.292519 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:58:05.292430 1103281 retry.go:31] will retry after 2.846623941s: waiting for machine to come up
	I0717 19:58:08.142536 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:08.143021 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:58:08.143050 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:58:08.142961 1103281 retry.go:31] will retry after 3.495052949s: waiting for machine to come up
	I0717 19:58:11.641858 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:11.642358 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:58:11.642384 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:58:11.642302 1103281 retry.go:31] will retry after 5.256158303s: waiting for machine to come up
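A note on the repeated retry.go:31 lines above: while the restarted VM waits for a DHCP lease, minikube polls for the domain's IP and sleeps a growing, jittered interval between attempts. The following is a minimal standalone Go sketch of that kind of polling loop, not the actual minikube/libmachine code; lookupIP is a stand-in for the real query against the libvirt network's DHCP leases.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for the real query against the network's DHCP leases.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address of domain")
}

// waitForIP polls lookupIP until it succeeds or the timeout expires, sleeping a
// growing, jittered interval between attempts (compare the 210ms, 348ms,
// 551ms, ... progression in the log above).
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		backoff = backoff * 3 / 2
	}
	return "", fmt.Errorf("machine did not come up within %v", timeout)
}

func main() {
	if _, err := waitForIP(2 * time.Second); err != nil {
		fmt.Println(err)
	}
}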
	I0717 19:58:18.263277 1102415 start.go:369] acquired machines lock for "default-k8s-diff-port-711413" in 4m14.158154198s
	I0717 19:58:18.263342 1102415 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:58:18.263362 1102415 fix.go:54] fixHost starting: 
	I0717 19:58:18.263897 1102415 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:58:18.263950 1102415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:58:18.280719 1102415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33059
	I0717 19:58:18.281241 1102415 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:58:18.281819 1102415 main.go:141] libmachine: Using API Version  1
	I0717 19:58:18.281845 1102415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:58:18.282261 1102415 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:58:18.282489 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	I0717 19:58:18.282657 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetState
	I0717 19:58:18.284625 1102415 fix.go:102] recreateIfNeeded on default-k8s-diff-port-711413: state=Stopped err=<nil>
	I0717 19:58:18.284655 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	W0717 19:58:18.284839 1102415 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 19:58:18.288135 1102415 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-711413" ...
	I0717 19:58:16.902597 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:16.903197 1102136 main.go:141] libmachine: (no-preload-408472) Found IP for machine: 192.168.61.65
	I0717 19:58:16.903226 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has current primary IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:16.903232 1102136 main.go:141] libmachine: (no-preload-408472) Reserving static IP address...
	I0717 19:58:16.903758 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "no-preload-408472", mac: "52:54:00:36:75:ac", ip: "192.168.61.65"} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:16.903794 1102136 main.go:141] libmachine: (no-preload-408472) Reserved static IP address: 192.168.61.65
	I0717 19:58:16.903806 1102136 main.go:141] libmachine: (no-preload-408472) DBG | skip adding static IP to network mk-no-preload-408472 - found existing host DHCP lease matching {name: "no-preload-408472", mac: "52:54:00:36:75:ac", ip: "192.168.61.65"}
	I0717 19:58:16.903820 1102136 main.go:141] libmachine: (no-preload-408472) DBG | Getting to WaitForSSH function...
	I0717 19:58:16.903830 1102136 main.go:141] libmachine: (no-preload-408472) Waiting for SSH to be available...
	I0717 19:58:16.906385 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:16.906796 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:16.906833 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:16.906966 1102136 main.go:141] libmachine: (no-preload-408472) DBG | Using SSH client type: external
	I0717 19:58:16.907000 1102136 main.go:141] libmachine: (no-preload-408472) DBG | Using SSH private key: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/no-preload-408472/id_rsa (-rw-------)
	I0717 19:58:16.907034 1102136 main.go:141] libmachine: (no-preload-408472) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/no-preload-408472/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:58:16.907056 1102136 main.go:141] libmachine: (no-preload-408472) DBG | About to run SSH command:
	I0717 19:58:16.907116 1102136 main.go:141] libmachine: (no-preload-408472) DBG | exit 0
	I0717 19:58:16.998306 1102136 main.go:141] libmachine: (no-preload-408472) DBG | SSH cmd err, output: <nil>: 
	I0717 19:58:16.998744 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetConfigRaw
	I0717 19:58:16.999490 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetIP
	I0717 19:58:17.002697 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.003108 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:17.003156 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.003405 1102136 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/config.json ...
	I0717 19:58:17.003642 1102136 machine.go:88] provisioning docker machine ...
	I0717 19:58:17.003668 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	I0717 19:58:17.003989 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetMachineName
	I0717 19:58:17.004208 1102136 buildroot.go:166] provisioning hostname "no-preload-408472"
	I0717 19:58:17.004234 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetMachineName
	I0717 19:58:17.004464 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:58:17.007043 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.007337 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:17.007371 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.007517 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:58:17.007730 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:17.007933 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:17.008071 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:58:17.008245 1102136 main.go:141] libmachine: Using SSH client type: native
	I0717 19:58:17.008906 1102136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.65 22 <nil> <nil>}
	I0717 19:58:17.008927 1102136 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-408472 && echo "no-preload-408472" | sudo tee /etc/hostname
	I0717 19:58:17.143779 1102136 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-408472
	
	I0717 19:58:17.143816 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:58:17.146881 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.147332 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:17.147384 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.147556 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:58:17.147807 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:17.147990 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:17.148137 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:58:17.148320 1102136 main.go:141] libmachine: Using SSH client type: native
	I0717 19:58:17.148789 1102136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.65 22 <nil> <nil>}
	I0717 19:58:17.148811 1102136 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-408472' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-408472/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-408472' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:58:17.279254 1102136 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:58:17.279292 1102136 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16890-1061725/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-1061725/.minikube}
	I0717 19:58:17.279339 1102136 buildroot.go:174] setting up certificates
	I0717 19:58:17.279375 1102136 provision.go:83] configureAuth start
	I0717 19:58:17.279390 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetMachineName
	I0717 19:58:17.279745 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetIP
	I0717 19:58:17.283125 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.283563 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:17.283610 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.283837 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:58:17.286508 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.286931 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:17.286975 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.287088 1102136 provision.go:138] copyHostCerts
	I0717 19:58:17.287196 1102136 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem, removing ...
	I0717 19:58:17.287210 1102136 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem
	I0717 19:58:17.287299 1102136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem (1082 bytes)
	I0717 19:58:17.287430 1102136 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem, removing ...
	I0717 19:58:17.287443 1102136 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem
	I0717 19:58:17.287486 1102136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem (1123 bytes)
	I0717 19:58:17.287634 1102136 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem, removing ...
	I0717 19:58:17.287650 1102136 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem
	I0717 19:58:17.287691 1102136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem (1675 bytes)
	I0717 19:58:17.287762 1102136 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem org=jenkins.no-preload-408472 san=[192.168.61.65 192.168.61.65 localhost 127.0.0.1 minikube no-preload-408472]
	I0717 19:58:17.492065 1102136 provision.go:172] copyRemoteCerts
	I0717 19:58:17.492172 1102136 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:58:17.492209 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:58:17.495444 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.495931 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:17.495971 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.496153 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:58:17.496406 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:17.496605 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:58:17.496793 1102136 sshutil.go:53] new ssh client: &{IP:192.168.61.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/no-preload-408472/id_rsa Username:docker}
	I0717 19:58:17.588540 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 19:58:17.613378 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0717 19:58:17.638066 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:58:17.662222 1102136 provision.go:86] duration metric: configureAuth took 382.813668ms
	I0717 19:58:17.662267 1102136 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:58:17.662522 1102136 config.go:182] Loaded profile config "no-preload-408472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:58:17.662613 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:58:17.665914 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.666415 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:17.666475 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.666673 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:58:17.666934 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:17.667122 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:17.667287 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:58:17.667466 1102136 main.go:141] libmachine: Using SSH client type: native
	I0717 19:58:17.667885 1102136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.65 22 <nil> <nil>}
	I0717 19:58:17.667903 1102136 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:58:17.997416 1102136 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:58:17.997461 1102136 machine.go:91] provisioned docker machine in 993.802909ms
	I0717 19:58:17.997476 1102136 start.go:300] post-start starting for "no-preload-408472" (driver="kvm2")
	I0717 19:58:17.997490 1102136 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:58:17.997533 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	I0717 19:58:17.997925 1102136 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:58:17.998013 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:58:18.000755 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:18.001185 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:18.001210 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:18.001409 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:58:18.001682 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:18.001892 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:58:18.002059 1102136 sshutil.go:53] new ssh client: &{IP:192.168.61.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/no-preload-408472/id_rsa Username:docker}
	I0717 19:58:18.093738 1102136 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:58:18.098709 1102136 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 19:58:18.098744 1102136 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/addons for local assets ...
	I0717 19:58:18.098854 1102136 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/files for local assets ...
	I0717 19:58:18.098974 1102136 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> 10689542.pem in /etc/ssl/certs
	I0717 19:58:18.099098 1102136 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:58:18.110195 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:58:18.135572 1102136 start.go:303] post-start completed in 138.074603ms
	I0717 19:58:18.135628 1102136 fix.go:56] fixHost completed within 24.21376423s
	I0717 19:58:18.135652 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:58:18.139033 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:18.139617 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:18.139656 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:18.139847 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:58:18.140146 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:18.140366 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:18.140612 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:58:18.140819 1102136 main.go:141] libmachine: Using SSH client type: native
	I0717 19:58:18.141265 1102136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.65 22 <nil> <nil>}
	I0717 19:58:18.141282 1102136 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 19:58:18.263053 1102136 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689623898.247474645
	
	I0717 19:58:18.263080 1102136 fix.go:206] guest clock: 1689623898.247474645
	I0717 19:58:18.263096 1102136 fix.go:219] Guest: 2023-07-17 19:58:18.247474645 +0000 UTC Remote: 2023-07-17 19:58:18.135632998 +0000 UTC m=+289.513196741 (delta=111.841647ms)
	I0717 19:58:18.263124 1102136 fix.go:190] guest clock delta is within tolerance: 111.841647ms
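The guest clock lines above come from a sanity check after provisioning: minikube runs `date +%s.%N` in the guest and compares the result with the host clock before continuing. Below is a rough standalone Go version of that comparison, using the values from the log; the one-second tolerance is illustrative only, not the value minikube actually uses.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the output of `date +%s.%N` from the guest and returns the
// absolute difference from the given host time.
func clockDelta(guestOutput string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
	if err != nil {
		return 0, fmt.Errorf("parsing guest clock %q: %w", guestOutput, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	d := host.Sub(guest)
	if d < 0 {
		d = -d
	}
	return d, nil
}

func main() {
	// Host and guest timestamps taken from the log lines above.
	host := time.Date(2023, 7, 17, 19, 58, 18, 135632998, time.UTC)
	d, err := clockDelta("1689623898.247474645", host)
	if err != nil {
		panic(err)
	}
	const tolerance = time.Second // illustrative threshold only
	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", d, d <= tolerance)
}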
	I0717 19:58:18.263132 1102136 start.go:83] releasing machines lock for "no-preload-408472", held for 24.341313825s
	I0717 19:58:18.263184 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	I0717 19:58:18.263451 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetIP
	I0717 19:58:18.266352 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:18.266707 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:18.266732 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:18.266920 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	I0717 19:58:18.267684 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	I0717 19:58:18.267935 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	I0717 19:58:18.268033 1102136 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:58:18.268095 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:58:18.268205 1102136 ssh_runner.go:195] Run: cat /version.json
	I0717 19:58:18.268249 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:58:18.270983 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:18.271223 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:18.271324 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:18.271385 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:18.271494 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:58:18.271608 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:18.271628 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:18.271697 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:18.271879 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:58:18.271895 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:58:18.272094 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:18.272099 1102136 sshutil.go:53] new ssh client: &{IP:192.168.61.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/no-preload-408472/id_rsa Username:docker}
	I0717 19:58:18.272253 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:58:18.272419 1102136 sshutil.go:53] new ssh client: &{IP:192.168.61.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/no-preload-408472/id_rsa Username:docker}
	W0717 19:58:18.395775 1102136 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 19:58:18.395916 1102136 ssh_runner.go:195] Run: systemctl --version
	I0717 19:58:18.403799 1102136 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:58:18.557449 1102136 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:58:18.564470 1102136 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:58:18.564580 1102136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:58:18.580344 1102136 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:58:18.580386 1102136 start.go:469] detecting cgroup driver to use...
	I0717 19:58:18.580482 1102136 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:58:18.595052 1102136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:58:18.608844 1102136 docker.go:196] disabling cri-docker service (if available) ...
	I0717 19:58:18.608948 1102136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:58:18.621908 1102136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:58:18.635796 1102136 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:58:18.290375 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .Start
	I0717 19:58:18.290615 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Ensuring networks are active...
	I0717 19:58:18.291470 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Ensuring network default is active
	I0717 19:58:18.292041 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Ensuring network mk-default-k8s-diff-port-711413 is active
	I0717 19:58:18.292477 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Getting domain xml...
	I0717 19:58:18.293393 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Creating domain...
	I0717 19:58:18.751368 1102136 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:58:18.878097 1102136 docker.go:212] disabling docker service ...
	I0717 19:58:18.878186 1102136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:58:18.895094 1102136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:58:18.909958 1102136 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:58:19.032014 1102136 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:58:19.141917 1102136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:58:19.158474 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:58:19.178688 1102136 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:58:19.178767 1102136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:58:19.189949 1102136 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:58:19.190059 1102136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:58:19.201270 1102136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:58:19.212458 1102136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:58:19.226193 1102136 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:58:19.239919 1102136 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:58:19.251627 1102136 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:58:19.251711 1102136 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:58:19.268984 1102136 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:58:19.281898 1102136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:58:19.390523 1102136 ssh_runner.go:195] Run: sudo systemctl restart crio
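The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and switch CRI-O to the cgroupfs cgroup manager before the service is restarted. Purely as an illustration of the same edits (not minikube's implementation, which shells out to sed over SSH inside the VM), a small Go program performing them on a local copy of the file could look like this:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// configureCrio rewrites the pause_image and cgroup_manager settings in a
// CRI-O drop-in config file, mirroring the sed edits shown in the log.
func configureCrio(confPath, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(confPath)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
	return os.WriteFile(confPath, out, 0o644)
}

func main() {
	// Local file name used for illustration; the real file lives in the VM.
	if err := configureCrio("02-crio.conf", "registry.k8s.io/pause:3.9", "cgroupfs"); err != nil {
		fmt.Println(err)
	}
}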
	I0717 19:58:19.599827 1102136 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:58:19.599937 1102136 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:58:19.605741 1102136 start.go:537] Will wait 60s for crictl version
	I0717 19:58:19.605810 1102136 ssh_runner.go:195] Run: which crictl
	I0717 19:58:19.610347 1102136 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:58:19.653305 1102136 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
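After restarting CRI-O, the log shows two bounded waits: up to 60s for /var/run/crio/crio.sock to appear and up to 60s for `crictl version` to answer. A minimal sketch of that kind of bounded polling follows; it checks a local path with os.Stat rather than running the check over SSH as minikube does.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for the given path until it exists or the timeout
// expires, mirroring the "Will wait 60s for socket path" step in the log.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}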
	I0717 19:58:19.653418 1102136 ssh_runner.go:195] Run: crio --version
	I0717 19:58:19.712418 1102136 ssh_runner.go:195] Run: crio --version
	I0717 19:58:19.773012 1102136 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0717 19:58:19.775099 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetIP
	I0717 19:58:19.778530 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:19.779127 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:19.779167 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:19.779477 1102136 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0717 19:58:19.784321 1102136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:58:19.797554 1102136 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 19:58:19.797682 1102136 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:58:19.833548 1102136 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0717 19:58:19.833590 1102136 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.27.3 registry.k8s.io/kube-controller-manager:v1.27.3 registry.k8s.io/kube-scheduler:v1.27.3 registry.k8s.io/kube-proxy:v1.27.3 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.7-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 19:58:19.833722 1102136 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:58:19.833749 1102136 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0717 19:58:19.833760 1102136 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.7-0
	I0717 19:58:19.833787 1102136 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0717 19:58:19.833821 1102136 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.27.3
	I0717 19:58:19.833722 1102136 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.27.3
	I0717 19:58:19.833722 1102136 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 19:58:19.833722 1102136 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.27.3
	I0717 19:58:19.835432 1102136 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 19:58:19.835461 1102136 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.7-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.7-0
	I0717 19:58:19.835497 1102136 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:58:19.835492 1102136 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0717 19:58:19.835432 1102136 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0717 19:58:19.835432 1102136 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.27.3
	I0717 19:58:19.835463 1102136 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.27.3
	I0717 19:58:19.835436 1102136 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.27.3
	I0717 19:58:20.032458 1102136 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.27.3
	I0717 19:58:20.032526 1102136 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I0717 19:58:20.035507 1102136 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.7-0
	I0717 19:58:20.035509 1102136 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.27.3
	I0717 19:58:20.041878 1102136 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 19:58:20.056915 1102136 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0717 19:58:20.099112 1102136 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.27.3
	I0717 19:58:20.119661 1102136 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:58:20.195250 1102136 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.27.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.27.3" does not exist at hash "08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a" in container runtime
	I0717 19:58:20.195338 1102136 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0717 19:58:20.195384 1102136 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0717 19:58:20.195441 1102136 ssh_runner.go:195] Run: which crictl
	I0717 19:58:20.195348 1102136 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.27.3
	I0717 19:58:20.195521 1102136 ssh_runner.go:195] Run: which crictl
	I0717 19:58:20.212109 1102136 cache_images.go:116] "registry.k8s.io/etcd:3.5.7-0" needs transfer: "registry.k8s.io/etcd:3.5.7-0" does not exist at hash "86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681" in container runtime
	I0717 19:58:20.212185 1102136 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.7-0
	I0717 19:58:20.212255 1102136 ssh_runner.go:195] Run: which crictl
	I0717 19:58:20.232021 1102136 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.27.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.27.3" does not exist at hash "41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a" in container runtime
	I0717 19:58:20.232077 1102136 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.27.3
	I0717 19:58:20.232126 1102136 ssh_runner.go:195] Run: which crictl
	I0717 19:58:20.232224 1102136 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.27.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.27.3" does not exist at hash "7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f" in container runtime
	I0717 19:58:20.232257 1102136 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 19:58:20.232287 1102136 ssh_runner.go:195] Run: which crictl
	I0717 19:58:20.363363 1102136 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.27.3" needs transfer: "registry.k8s.io/kube-proxy:v1.27.3" does not exist at hash "5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c" in container runtime
	I0717 19:58:20.363425 1102136 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.27.3
	I0717 19:58:20.363470 1102136 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0717 19:58:20.363498 1102136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0717 19:58:20.363529 1102136 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:58:20.363483 1102136 ssh_runner.go:195] Run: which crictl
	I0717 19:58:20.363579 1102136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.27.3
	I0717 19:58:20.363660 1102136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.7-0
	I0717 19:58:20.363569 1102136 ssh_runner.go:195] Run: which crictl
	I0717 19:58:20.363722 1102136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.27.3
	I0717 19:58:20.363783 1102136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 19:58:20.368457 1102136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.27.3
	I0717 19:58:20.469461 1102136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.27.3
	I0717 19:58:20.469647 1102136 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.27.3
	I0717 19:58:20.476546 1102136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.7-0
	I0717 19:58:20.476613 1102136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:58:20.476657 1102136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.27.3
	I0717 19:58:20.476703 1102136 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.7-0
	I0717 19:58:20.476751 1102136 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.27.3
	I0717 19:58:20.476824 1102136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I0717 19:58:20.476918 1102136 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1
	I0717 19:58:20.483915 1102136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.27.3
	I0717 19:58:20.483949 1102136 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.27.3 (exists)
	I0717 19:58:20.483966 1102136 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.27.3
	I0717 19:58:20.483970 1102136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.27.3
	I0717 19:58:20.484015 1102136 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.27.3
	I0717 19:58:20.484030 1102136 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.27.3
	I0717 19:58:20.484067 1102136 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.27.3
	I0717 19:58:20.532090 1102136 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I0717 19:58:20.532113 1102136 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.7-0 (exists)
	I0717 19:58:20.532194 1102136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 19:58:20.532213 1102136 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.27.3 (exists)
	I0717 19:58:20.532304 1102136 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0717 19:58:19.668342 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting to get IP...
	I0717 19:58:19.669327 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:19.669868 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:19.669996 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:19.669860 1103407 retry.go:31] will retry after 270.908859ms: waiting for machine to come up
	I0717 19:58:19.942914 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:19.943490 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:19.943524 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:19.943434 1103407 retry.go:31] will retry after 387.572792ms: waiting for machine to come up
	I0717 19:58:20.333302 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:20.333904 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:20.333934 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:20.333842 1103407 retry.go:31] will retry after 325.807844ms: waiting for machine to come up
	I0717 19:58:20.661438 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:20.661890 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:20.661926 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:20.661828 1103407 retry.go:31] will retry after 492.482292ms: waiting for machine to come up
	I0717 19:58:21.155613 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:21.156184 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:21.156212 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:21.156089 1103407 retry.go:31] will retry after 756.388438ms: waiting for machine to come up
	I0717 19:58:21.914212 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:21.914770 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:21.914806 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:21.914695 1103407 retry.go:31] will retry after 754.504649ms: waiting for machine to come up
	I0717 19:58:22.670913 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:22.671334 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:22.671369 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:22.671278 1103407 retry.go:31] will retry after 790.272578ms: waiting for machine to come up
	I0717 19:58:23.463657 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:23.464118 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:23.464145 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:23.464042 1103407 retry.go:31] will retry after 1.267289365s: waiting for machine to come up
	I0717 19:58:23.707718 1102136 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.27.3: (3.223672376s)
	I0717 19:58:23.707750 1102136 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.27.3 from cache
	I0717 19:58:23.707788 1102136 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I0717 19:58:23.707804 1102136 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.27.3: (3.223748615s)
	I0717 19:58:23.707842 1102136 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.27.3 (exists)
	I0717 19:58:23.707856 1102136 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.27.3: (3.223769648s)
	I0717 19:58:23.707862 1102136 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I0717 19:58:23.707878 1102136 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.27.3 (exists)
	I0717 19:58:23.707908 1102136 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (3.175586566s)
	I0717 19:58:23.707926 1102136 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0717 19:58:24.960652 1102136 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.252755334s)
	I0717 19:58:24.960691 1102136 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I0717 19:58:24.960722 1102136 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.7-0
	I0717 19:58:24.960770 1102136 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.7-0
	I0717 19:58:24.733590 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:24.734140 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:24.734176 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:24.734049 1103407 retry.go:31] will retry after 1.733875279s: waiting for machine to come up
	I0717 19:58:26.470148 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:26.470587 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:26.470640 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:26.470522 1103407 retry.go:31] will retry after 1.829632979s: waiting for machine to come up
	I0717 19:58:28.301973 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:28.302506 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:28.302560 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:28.302421 1103407 retry.go:31] will retry after 2.201530837s: waiting for machine to come up
	I0717 19:58:32.118558 1102136 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.7-0: (7.157750323s)
	I0717 19:58:32.118606 1102136 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.7-0 from cache
	I0717 19:58:32.118641 1102136 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.27.3
	I0717 19:58:32.118700 1102136 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.27.3
	I0717 19:58:33.577369 1102136 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.27.3: (1.458638516s)
	I0717 19:58:33.577400 1102136 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.27.3 from cache
	I0717 19:58:33.577447 1102136 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.27.3
	I0717 19:58:33.577595 1102136 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.27.3
	I0717 19:58:30.507029 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:30.507586 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:30.507647 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:30.507447 1103407 retry.go:31] will retry after 2.947068676s: waiting for machine to come up
	I0717 19:58:33.456714 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:33.457232 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:33.457261 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:33.457148 1103407 retry.go:31] will retry after 3.074973516s: waiting for machine to come up
	I0717 19:58:37.871095 1103141 start.go:369] acquired machines lock for "embed-certs-114855" in 1m22.018672602s
	I0717 19:58:37.871161 1103141 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:58:37.871175 1103141 fix.go:54] fixHost starting: 
	I0717 19:58:37.871619 1103141 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:58:37.871689 1103141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:58:37.889865 1103141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46381
	I0717 19:58:37.890334 1103141 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:58:37.891044 1103141 main.go:141] libmachine: Using API Version  1
	I0717 19:58:37.891070 1103141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:58:37.891471 1103141 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:58:37.891734 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 19:58:37.891927 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetState
	I0717 19:58:37.893736 1103141 fix.go:102] recreateIfNeeded on embed-certs-114855: state=Stopped err=<nil>
	I0717 19:58:37.893779 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	W0717 19:58:37.893994 1103141 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 19:58:37.896545 1103141 out.go:177] * Restarting existing kvm2 VM for "embed-certs-114855" ...
	I0717 19:58:35.345141 1102136 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.27.3: (1.767506173s)
	I0717 19:58:35.345180 1102136 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.27.3 from cache
	I0717 19:58:35.345211 1102136 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.27.3
	I0717 19:58:35.345273 1102136 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.27.3
	I0717 19:58:37.803066 1102136 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.27.3: (2.457743173s)
	I0717 19:58:37.803106 1102136 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.27.3 from cache
	I0717 19:58:37.803139 1102136 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 19:58:37.803193 1102136 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0717 19:58:38.559165 1102136 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 19:58:38.559222 1102136 cache_images.go:123] Successfully loaded all cached images
	I0717 19:58:38.559231 1102136 cache_images.go:92] LoadImages completed in 18.725611601s
	I0717 19:58:38.559363 1102136 ssh_runner.go:195] Run: crio config
	I0717 19:58:38.630364 1102136 cni.go:84] Creating CNI manager for ""
	I0717 19:58:38.630394 1102136 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:58:38.630421 1102136 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 19:58:38.630447 1102136 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.65 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-408472 NodeName:no-preload-408472 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:58:38.630640 1102136 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-408472"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:58:38.630739 1102136 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-408472 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:no-preload-408472 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 19:58:38.630813 1102136 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 19:58:38.643343 1102136 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:58:38.643443 1102136 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:58:38.653495 1102136 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0717 19:58:36.535628 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.536224 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Found IP for machine: 192.168.72.51
	I0717 19:58:36.536256 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Reserving static IP address...
	I0717 19:58:36.536278 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has current primary IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.536720 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-711413", mac: "52:54:00:7d:d7:a9", ip: "192.168.72.51"} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:36.536756 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | skip adding static IP to network mk-default-k8s-diff-port-711413 - found existing host DHCP lease matching {name: "default-k8s-diff-port-711413", mac: "52:54:00:7d:d7:a9", ip: "192.168.72.51"}
	I0717 19:58:36.536773 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Reserved static IP address: 192.168.72.51
	I0717 19:58:36.536791 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for SSH to be available...
	I0717 19:58:36.536804 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | Getting to WaitForSSH function...
	I0717 19:58:36.540038 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.540593 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:36.540649 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.540764 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | Using SSH client type: external
	I0717 19:58:36.540799 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | Using SSH private key: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/default-k8s-diff-port-711413/id_rsa (-rw-------)
	I0717 19:58:36.540855 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/default-k8s-diff-port-711413/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:58:36.540876 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | About to run SSH command:
	I0717 19:58:36.540895 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | exit 0
	I0717 19:58:36.637774 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | SSH cmd err, output: <nil>: 
	I0717 19:58:36.638200 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetConfigRaw
	I0717 19:58:36.638931 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetIP
	I0717 19:58:36.642048 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.642530 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:36.642560 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.642850 1102415 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/config.json ...
	I0717 19:58:36.643061 1102415 machine.go:88] provisioning docker machine ...
	I0717 19:58:36.643080 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	I0717 19:58:36.643344 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetMachineName
	I0717 19:58:36.643516 1102415 buildroot.go:166] provisioning hostname "default-k8s-diff-port-711413"
	I0717 19:58:36.643535 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetMachineName
	I0717 19:58:36.643766 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:58:36.646810 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.647205 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:36.647243 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.647582 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:58:36.647826 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:36.648082 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:36.648275 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:58:36.648470 1102415 main.go:141] libmachine: Using SSH client type: native
	I0717 19:58:36.648883 1102415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.51 22 <nil> <nil>}
	I0717 19:58:36.648898 1102415 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-711413 && echo "default-k8s-diff-port-711413" | sudo tee /etc/hostname
	I0717 19:58:36.784478 1102415 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-711413
	
	I0717 19:58:36.784524 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:58:36.787641 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.788065 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:36.788118 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.788363 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:58:36.788605 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:36.788799 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:36.788942 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:58:36.789239 1102415 main.go:141] libmachine: Using SSH client type: native
	I0717 19:58:36.789869 1102415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.51 22 <nil> <nil>}
	I0717 19:58:36.789916 1102415 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-711413' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-711413/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-711413' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:58:36.923177 1102415 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:58:36.923211 1102415 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16890-1061725/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-1061725/.minikube}
	I0717 19:58:36.923237 1102415 buildroot.go:174] setting up certificates
	I0717 19:58:36.923248 1102415 provision.go:83] configureAuth start
	I0717 19:58:36.923257 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetMachineName
	I0717 19:58:36.923633 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetIP
	I0717 19:58:36.927049 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.927406 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:36.927443 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.927641 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:58:36.930158 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.930705 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:36.930771 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.930844 1102415 provision.go:138] copyHostCerts
	I0717 19:58:36.930969 1102415 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem, removing ...
	I0717 19:58:36.930984 1102415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem
	I0717 19:58:36.931064 1102415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem (1123 bytes)
	I0717 19:58:36.931188 1102415 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem, removing ...
	I0717 19:58:36.931201 1102415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem
	I0717 19:58:36.931235 1102415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem (1675 bytes)
	I0717 19:58:36.931315 1102415 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem, removing ...
	I0717 19:58:36.931325 1102415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem
	I0717 19:58:36.931353 1102415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem (1082 bytes)
	I0717 19:58:36.931423 1102415 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-711413 san=[192.168.72.51 192.168.72.51 localhost 127.0.0.1 minikube default-k8s-diff-port-711413]
	I0717 19:58:37.043340 1102415 provision.go:172] copyRemoteCerts
	I0717 19:58:37.043444 1102415 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:58:37.043487 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:58:37.047280 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.047842 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:37.047879 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.048143 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:58:37.048410 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:37.048648 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:58:37.048844 1102415 sshutil.go:53] new ssh client: &{IP:192.168.72.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/default-k8s-diff-port-711413/id_rsa Username:docker}
	I0717 19:58:37.147255 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 19:58:37.175437 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0717 19:58:37.202827 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:58:37.231780 1102415 provision.go:86] duration metric: configureAuth took 308.515103ms
	I0717 19:58:37.231818 1102415 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:58:37.232118 1102415 config.go:182] Loaded profile config "default-k8s-diff-port-711413": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:58:37.232255 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:58:37.235364 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.235964 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:37.236011 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.236296 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:58:37.236533 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:37.236793 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:37.236976 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:58:37.237175 1102415 main.go:141] libmachine: Using SSH client type: native
	I0717 19:58:37.237831 1102415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.51 22 <nil> <nil>}
	I0717 19:58:37.237866 1102415 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:58:37.601591 1102415 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:58:37.601631 1102415 machine.go:91] provisioned docker machine in 958.556319ms
	I0717 19:58:37.601644 1102415 start.go:300] post-start starting for "default-k8s-diff-port-711413" (driver="kvm2")
	I0717 19:58:37.601665 1102415 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:58:37.601692 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	I0717 19:58:37.602018 1102415 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:58:37.602048 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:58:37.604964 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.605335 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:37.605387 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.605486 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:58:37.605822 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:37.606033 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:58:37.606224 1102415 sshutil.go:53] new ssh client: &{IP:192.168.72.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/default-k8s-diff-port-711413/id_rsa Username:docker}
	I0717 19:58:37.696316 1102415 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:58:37.701409 1102415 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 19:58:37.701442 1102415 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/addons for local assets ...
	I0717 19:58:37.701579 1102415 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/files for local assets ...
	I0717 19:58:37.701694 1102415 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> 10689542.pem in /etc/ssl/certs
	I0717 19:58:37.701827 1102415 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:58:37.711545 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:58:37.739525 1102415 start.go:303] post-start completed in 137.838589ms
	I0717 19:58:37.739566 1102415 fix.go:56] fixHost completed within 19.476203721s
	I0717 19:58:37.739599 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:58:37.742744 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.743095 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:37.743127 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.743298 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:58:37.743568 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:37.743768 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:37.743929 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:58:37.744164 1102415 main.go:141] libmachine: Using SSH client type: native
	I0717 19:58:37.744786 1102415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.51 22 <nil> <nil>}
	I0717 19:58:37.744809 1102415 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:58:37.870894 1102415 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689623917.842259641
	
	I0717 19:58:37.870923 1102415 fix.go:206] guest clock: 1689623917.842259641
	I0717 19:58:37.870931 1102415 fix.go:219] Guest: 2023-07-17 19:58:37.842259641 +0000 UTC Remote: 2023-07-17 19:58:37.739572977 +0000 UTC m=+273.789942316 (delta=102.686664ms)
	I0717 19:58:37.870992 1102415 fix.go:190] guest clock delta is within tolerance: 102.686664ms
	I0717 19:58:37.871004 1102415 start.go:83] releasing machines lock for "default-k8s-diff-port-711413", held for 19.607687828s
	I0717 19:58:37.871044 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	I0717 19:58:37.871350 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetIP
	I0717 19:58:37.874527 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.874967 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:37.875029 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.875202 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	I0717 19:58:37.875791 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	I0717 19:58:37.876007 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	I0717 19:58:37.876141 1102415 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:58:37.876211 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:58:37.876261 1102415 ssh_runner.go:195] Run: cat /version.json
	I0717 19:58:37.876289 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:58:37.879243 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.879483 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.879717 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:37.879752 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.879861 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:58:37.880090 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:37.880098 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:37.880118 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.880204 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:58:37.880335 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:58:37.880427 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:37.880513 1102415 sshutil.go:53] new ssh client: &{IP:192.168.72.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/default-k8s-diff-port-711413/id_rsa Username:docker}
	I0717 19:58:37.880582 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:58:37.880714 1102415 sshutil.go:53] new ssh client: &{IP:192.168.72.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/default-k8s-diff-port-711413/id_rsa Username:docker}
	W0717 19:58:37.967909 1102415 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 19:58:37.968017 1102415 ssh_runner.go:195] Run: systemctl --version
	I0717 19:58:37.997996 1102415 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:58:38.148654 1102415 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:58:38.156049 1102415 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:58:38.156151 1102415 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:58:38.177835 1102415 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:58:38.177866 1102415 start.go:469] detecting cgroup driver to use...
	I0717 19:58:38.177945 1102415 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:58:38.196359 1102415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:58:38.209697 1102415 docker.go:196] disabling cri-docker service (if available) ...
	I0717 19:58:38.209777 1102415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:58:38.226250 1102415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:58:38.244868 1102415 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:58:38.385454 1102415 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:58:38.527891 1102415 docker.go:212] disabling docker service ...
	I0717 19:58:38.527973 1102415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:58:38.546083 1102415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:58:38.562767 1102415 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:58:38.702706 1102415 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:58:38.828923 1102415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:58:38.845137 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:58:38.866427 1102415 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:58:38.866511 1102415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:58:38.878067 1102415 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:58:38.878157 1102415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:58:38.892494 1102415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:58:38.905822 1102415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:58:38.917786 1102415 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:58:38.931418 1102415 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:58:38.945972 1102415 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:58:38.946039 1102415 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:58:38.964498 1102415 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:58:38.977323 1102415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:58:39.098593 1102415 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:58:39.320821 1102415 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:58:39.320909 1102415 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:58:39.327195 1102415 start.go:537] Will wait 60s for crictl version
	I0717 19:58:39.327285 1102415 ssh_runner.go:195] Run: which crictl
	I0717 19:58:39.333466 1102415 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:58:39.372542 1102415 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 19:58:39.372643 1102415 ssh_runner.go:195] Run: crio --version
	I0717 19:58:39.419356 1102415 ssh_runner.go:195] Run: crio --version
	I0717 19:58:39.467405 1102415 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0717 19:58:37.898938 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .Start
	I0717 19:58:37.899185 1103141 main.go:141] libmachine: (embed-certs-114855) Ensuring networks are active...
	I0717 19:58:37.900229 1103141 main.go:141] libmachine: (embed-certs-114855) Ensuring network default is active
	I0717 19:58:37.900690 1103141 main.go:141] libmachine: (embed-certs-114855) Ensuring network mk-embed-certs-114855 is active
	I0717 19:58:37.901444 1103141 main.go:141] libmachine: (embed-certs-114855) Getting domain xml...
	I0717 19:58:37.902311 1103141 main.go:141] libmachine: (embed-certs-114855) Creating domain...
	I0717 19:58:39.293109 1103141 main.go:141] libmachine: (embed-certs-114855) Waiting to get IP...
	I0717 19:58:39.294286 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:39.294784 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:39.294877 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:39.294761 1103558 retry.go:31] will retry after 201.93591ms: waiting for machine to come up
	I0717 19:58:39.498428 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:39.499066 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:39.499123 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:39.498979 1103558 retry.go:31] will retry after 321.702493ms: waiting for machine to come up
	I0717 19:58:39.822708 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:39.823258 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:39.823287 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:39.823212 1103558 retry.go:31] will retry after 477.114259ms: waiting for machine to come up
	I0717 19:58:40.302080 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:40.302727 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:40.302755 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:40.302668 1103558 retry.go:31] will retry after 554.321931ms: waiting for machine to come up
	I0717 19:58:38.674825 1102136 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:58:38.697168 1102136 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0717 19:58:38.719030 1102136 ssh_runner.go:195] Run: grep 192.168.61.65	control-plane.minikube.internal$ /etc/hosts
	I0717 19:58:38.724312 1102136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.65	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:58:38.742726 1102136 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472 for IP: 192.168.61.65
	I0717 19:58:38.742830 1102136 certs.go:190] acquiring lock for shared ca certs: {Name:mkdfbc203d7bd874aff2f1c4e5c3352251804e32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:58:38.743029 1102136 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key
	I0717 19:58:38.743082 1102136 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key
	I0717 19:58:38.743238 1102136 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/client.key
	I0717 19:58:38.743316 1102136 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/apiserver.key.71349e66
	I0717 19:58:38.743370 1102136 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/proxy-client.key
	I0717 19:58:38.743527 1102136 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem (1338 bytes)
	W0717 19:58:38.743579 1102136 certs.go:433] ignoring /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954_empty.pem, impossibly tiny 0 bytes
	I0717 19:58:38.743597 1102136 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:58:38.743631 1102136 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem (1082 bytes)
	I0717 19:58:38.743667 1102136 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:58:38.743699 1102136 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem (1675 bytes)
	I0717 19:58:38.743759 1102136 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:58:38.744668 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 19:58:38.773602 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 19:58:38.799675 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:58:38.826050 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 19:58:38.856973 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:58:38.886610 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 19:58:38.916475 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:58:38.945986 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 19:58:38.973415 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem --> /usr/share/ca-certificates/1068954.pem (1338 bytes)
	I0717 19:58:39.002193 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /usr/share/ca-certificates/10689542.pem (1708 bytes)
	I0717 19:58:39.030265 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:58:39.062896 1102136 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:58:39.082877 1102136 ssh_runner.go:195] Run: openssl version
	I0717 19:58:39.090088 1102136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1068954.pem && ln -fs /usr/share/ca-certificates/1068954.pem /etc/ssl/certs/1068954.pem"
	I0717 19:58:39.104372 1102136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1068954.pem
	I0717 19:58:39.110934 1102136 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:52 /usr/share/ca-certificates/1068954.pem
	I0717 19:58:39.111023 1102136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1068954.pem
	I0717 19:58:39.117702 1102136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1068954.pem /etc/ssl/certs/51391683.0"
	I0717 19:58:39.132094 1102136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10689542.pem && ln -fs /usr/share/ca-certificates/10689542.pem /etc/ssl/certs/10689542.pem"
	I0717 19:58:39.149143 1102136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10689542.pem
	I0717 19:58:39.155238 1102136 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:52 /usr/share/ca-certificates/10689542.pem
	I0717 19:58:39.155359 1102136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10689542.pem
	I0717 19:58:39.164149 1102136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10689542.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:58:39.178830 1102136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:58:39.192868 1102136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:58:39.199561 1102136 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:58:39.199663 1102136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:58:39.208054 1102136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:58:39.220203 1102136 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 19:58:39.228030 1102136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:58:39.235220 1102136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:58:39.243450 1102136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:58:39.250709 1102136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:58:39.260912 1102136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:58:39.269318 1102136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
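	(Editor's note: the `openssl x509 -checkend 86400` runs above verify that each control-plane certificate is still valid for at least the next 24 hours before the cluster is restarted. The following is a minimal illustrative sketch of that same check in Go, not minikube's actual implementation; the file path and 24-hour window are taken from the log lines above.)

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within
	// the given window, i.e. the condition that makes `openssl x509 -checkend`
	// exit non-zero.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		// Path copied from the log above; -checkend 86400 corresponds to 24h.
		soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if soon {
			fmt.Println("certificate expires within 24h; it would need to be regenerated")
		}
	}
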
	I0717 19:58:39.277511 1102136 kubeadm.go:404] StartCluster: {Name:no-preload-408472 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:no-preload-408472 Namespace:defa
ult APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.65 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:58:39.277701 1102136 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:58:39.277789 1102136 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:58:39.317225 1102136 cri.go:89] found id: ""
	I0717 19:58:39.317321 1102136 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:58:39.330240 1102136 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 19:58:39.330274 1102136 kubeadm.go:636] restartCluster start
	I0717 19:58:39.330351 1102136 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:58:39.343994 1102136 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:39.345762 1102136 kubeconfig.go:92] found "no-preload-408472" server: "https://192.168.61.65:8443"
	I0717 19:58:39.350027 1102136 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:58:39.360965 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:39.361039 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:39.375103 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:39.875778 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:39.875891 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:39.892869 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:40.375344 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:40.375421 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:40.392992 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:40.875474 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:40.875590 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:40.892666 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:41.375224 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:41.375335 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:41.393833 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:41.875377 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:41.875466 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:41.893226 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:42.375846 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:42.375957 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:42.390397 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:42.876105 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:42.876220 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:42.889082 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:43.375654 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:43.375774 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:43.392598 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:39.469543 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetIP
	I0717 19:58:39.472792 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:39.473333 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:39.473386 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:39.473640 1102415 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 19:58:39.478276 1102415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:58:39.491427 1102415 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 19:58:39.491514 1102415 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:58:39.527759 1102415 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0717 19:58:39.527856 1102415 ssh_runner.go:195] Run: which lz4
	I0717 19:58:39.532935 1102415 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 19:58:39.537733 1102415 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:58:39.537785 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0717 19:58:41.480847 1102415 crio.go:444] Took 1.947975 seconds to copy over tarball
	I0717 19:58:41.480932 1102415 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 19:58:40.858380 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:40.858925 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:40.858970 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:40.858865 1103558 retry.go:31] will retry after 616.432145ms: waiting for machine to come up
	I0717 19:58:41.476868 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:41.477399 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:41.477434 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:41.477348 1103558 retry.go:31] will retry after 780.737319ms: waiting for machine to come up
	I0717 19:58:42.259853 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:42.260278 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:42.260310 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:42.260216 1103558 retry.go:31] will retry after 858.918849ms: waiting for machine to come up
	I0717 19:58:43.120599 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:43.121211 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:43.121247 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:43.121155 1103558 retry.go:31] will retry after 1.359881947s: waiting for machine to come up
	I0717 19:58:44.482733 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:44.483173 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:44.483203 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:44.483095 1103558 retry.go:31] will retry after 1.298020016s: waiting for machine to come up
	I0717 19:58:43.875260 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:43.875367 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:43.892010 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:44.376275 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:44.376378 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:44.394725 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:44.875258 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:44.875377 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:44.890500 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:45.376203 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:45.376337 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:45.392119 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:45.875466 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:45.875573 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:45.888488 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:46.376141 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:46.376288 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:46.391072 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:46.875635 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:46.875797 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:46.895087 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:47.375551 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:47.375653 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:47.392620 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:47.875197 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:47.875340 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:47.887934 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:48.375469 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:48.375588 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:48.392548 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:44.570404 1102415 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.089433908s)
	I0717 19:58:44.570451 1102415 crio.go:451] Took 3.089562 seconds to extract the tarball
	I0717 19:58:44.570465 1102415 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 19:58:44.615062 1102415 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:58:44.660353 1102415 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 19:58:44.660385 1102415 cache_images.go:84] Images are preloaded, skipping loading
	I0717 19:58:44.660468 1102415 ssh_runner.go:195] Run: crio config
	I0717 19:58:44.726880 1102415 cni.go:84] Creating CNI manager for ""
	I0717 19:58:44.726915 1102415 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:58:44.726946 1102415 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 19:58:44.726973 1102415 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.51 APIServerPort:8444 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-711413 NodeName:default-k8s-diff-port-711413 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:58:44.727207 1102415 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.51
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-711413"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:58:44.727340 1102415 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-711413 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-711413 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0717 19:58:44.727430 1102415 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 19:58:44.740398 1102415 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:58:44.740509 1102415 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:58:44.751288 1102415 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I0717 19:58:44.769779 1102415 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:58:44.788216 1102415 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I0717 19:58:44.808085 1102415 ssh_runner.go:195] Run: grep 192.168.72.51	control-plane.minikube.internal$ /etc/hosts
	I0717 19:58:44.812829 1102415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:58:44.826074 1102415 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413 for IP: 192.168.72.51
	I0717 19:58:44.826123 1102415 certs.go:190] acquiring lock for shared ca certs: {Name:mkdfbc203d7bd874aff2f1c4e5c3352251804e32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:58:44.826373 1102415 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key
	I0717 19:58:44.826440 1102415 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key
	I0717 19:58:44.826543 1102415 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/client.key
	I0717 19:58:44.826629 1102415 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/apiserver.key.f6db28d6
	I0717 19:58:44.826697 1102415 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/proxy-client.key
	I0717 19:58:44.826855 1102415 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem (1338 bytes)
	W0717 19:58:44.826902 1102415 certs.go:433] ignoring /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954_empty.pem, impossibly tiny 0 bytes
	I0717 19:58:44.826915 1102415 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:58:44.826953 1102415 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem (1082 bytes)
	I0717 19:58:44.826988 1102415 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:58:44.827026 1102415 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem (1675 bytes)
	I0717 19:58:44.827091 1102415 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:58:44.828031 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 19:58:44.856357 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 19:58:44.884042 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:58:44.915279 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 19:58:44.945170 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:58:44.974151 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 19:58:45.000387 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:58:45.027617 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 19:58:45.054305 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem --> /usr/share/ca-certificates/1068954.pem (1338 bytes)
	I0717 19:58:45.080828 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /usr/share/ca-certificates/10689542.pem (1708 bytes)
	I0717 19:58:45.107437 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:58:45.135588 1102415 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:58:45.155297 1102415 ssh_runner.go:195] Run: openssl version
	I0717 19:58:45.162096 1102415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10689542.pem && ln -fs /usr/share/ca-certificates/10689542.pem /etc/ssl/certs/10689542.pem"
	I0717 19:58:45.175077 1102415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10689542.pem
	I0717 19:58:45.180966 1102415 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:52 /usr/share/ca-certificates/10689542.pem
	I0717 19:58:45.181050 1102415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10689542.pem
	I0717 19:58:45.187351 1102415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10689542.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:58:45.199795 1102415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:58:45.214273 1102415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:58:45.220184 1102415 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:58:45.220269 1102415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:58:45.227207 1102415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:58:45.239921 1102415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1068954.pem && ln -fs /usr/share/ca-certificates/1068954.pem /etc/ssl/certs/1068954.pem"
	I0717 19:58:45.252978 1102415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1068954.pem
	I0717 19:58:45.259164 1102415 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:52 /usr/share/ca-certificates/1068954.pem
	I0717 19:58:45.259257 1102415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1068954.pem
	I0717 19:58:45.266134 1102415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1068954.pem /etc/ssl/certs/51391683.0"
	I0717 19:58:45.281302 1102415 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 19:58:45.287179 1102415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:58:45.294860 1102415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:58:45.302336 1102415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:58:45.309621 1102415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:58:45.316590 1102415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:58:45.323564 1102415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 19:58:45.330904 1102415 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-711413 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port
-711413 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.51 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountStr
ing:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:58:45.331050 1102415 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:58:45.331115 1102415 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:58:45.368522 1102415 cri.go:89] found id: ""
	I0717 19:58:45.368606 1102415 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:58:45.380610 1102415 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 19:58:45.380640 1102415 kubeadm.go:636] restartCluster start
	I0717 19:58:45.380711 1102415 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:58:45.391395 1102415 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:45.392845 1102415 kubeconfig.go:92] found "default-k8s-diff-port-711413" server: "https://192.168.72.51:8444"
	I0717 19:58:45.395718 1102415 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:58:45.405869 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:45.405954 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:45.417987 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:45.918789 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:45.918924 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:45.935620 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:46.418786 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:46.418918 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:46.435879 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:46.918441 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:46.918570 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:46.934753 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:47.418315 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:47.418429 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:47.434411 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:47.918984 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:47.919143 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:47.930556 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:48.418827 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:48.418915 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:48.430779 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:48.918288 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:48.918395 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:48.929830 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:45.782651 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:45.853667 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:45.853691 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:45.783094 1103558 retry.go:31] will retry after 2.002921571s: waiting for machine to come up
	I0717 19:58:47.788455 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:47.788965 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:47.788995 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:47.788914 1103558 retry.go:31] will retry after 2.108533646s: waiting for machine to come up
	I0717 19:58:49.899541 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:49.900028 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:49.900073 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:49.899974 1103558 retry.go:31] will retry after 3.529635748s: waiting for machine to come up
	I0717 19:58:48.875708 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:48.875803 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:48.893686 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:49.362030 1102136 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0717 19:58:49.362079 1102136 kubeadm.go:1128] stopping kube-system containers ...
	I0717 19:58:49.362096 1102136 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:58:49.362166 1102136 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:58:49.405900 1102136 cri.go:89] found id: ""
	I0717 19:58:49.405997 1102136 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:58:49.429666 1102136 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:58:49.440867 1102136 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:58:49.440938 1102136 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:58:49.454993 1102136 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 19:58:49.455023 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:49.606548 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:50.568083 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:50.782373 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:50.895178 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:50.999236 1102136 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:58:50.999321 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:51.519969 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:52.019769 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:52.519618 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:53.020330 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:53.519378 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:53.549727 1102136 api_server.go:72] duration metric: took 2.550491567s to wait for apiserver process to appear ...
	I0717 19:58:53.549757 1102136 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:58:53.549778 1102136 api_server.go:253] Checking apiserver healthz at https://192.168.61.65:8443/healthz ...
	I0717 19:58:49.418724 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:49.418839 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:49.431867 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:49.918433 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:49.918602 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:49.933324 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:50.418991 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:50.419113 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:50.433912 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:50.919128 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:50.919228 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:50.934905 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:51.418418 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:51.418557 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:51.430640 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:51.918136 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:51.918248 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:51.933751 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:52.418277 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:52.418388 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:52.434907 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:52.918570 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:52.918702 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:52.933426 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:53.418734 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:53.418828 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:53.431710 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:53.918381 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:53.918502 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:53.930053 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:53.431544 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:53.432055 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:53.432087 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:53.431995 1103558 retry.go:31] will retry after 3.133721334s: waiting for machine to come up
	I0717 19:58:57.990532 1102136 api_server.go:279] https://192.168.61.65:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:58:57.990581 1102136 api_server.go:103] status: https://192.168.61.65:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:58:58.491387 1102136 api_server.go:253] Checking apiserver healthz at https://192.168.61.65:8443/healthz ...
	I0717 19:58:58.501594 1102136 api_server.go:279] https://192.168.61.65:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:58:58.501636 1102136 api_server.go:103] status: https://192.168.61.65:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
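	(Editor's note: the 403 above is the anonymous probe being rejected before RBAC bootstrap completes, and the 500 shows the apiserver up but with post-start hooks still failing. Below is a minimal sketch, under assumptions, of the kind of healthz polling api_server.go performs against the address in the log; the real code authenticates with client certificates rather than probing anonymously.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The apiserver serves a self-signed certificate; skip verification
			// for this probe only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.61.65:8443/healthz") // address taken from the log above
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return // apiserver reports healthy
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("apiserver did not become healthy before the deadline")
	}
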
	I0717 19:58:54.418156 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:54.418290 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:54.430262 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:54.918831 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:54.918933 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:54.930380 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:55.406385 1102415 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0717 19:58:55.406432 1102415 kubeadm.go:1128] stopping kube-system containers ...
	I0717 19:58:55.406451 1102415 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:58:55.406530 1102415 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:58:55.444364 1102415 cri.go:89] found id: ""
	I0717 19:58:55.444447 1102415 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:58:55.460367 1102415 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:58:55.472374 1102415 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:58:55.472469 1102415 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:58:55.482078 1102415 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 19:58:55.482121 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:55.630428 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:56.221310 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:56.460424 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:56.570707 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:56.691954 1102415 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:58:56.692053 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:57.209115 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:57.708801 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:58.209204 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:58.709268 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:58.991630 1102136 api_server.go:253] Checking apiserver healthz at https://192.168.61.65:8443/healthz ...
	I0717 19:58:58.999253 1102136 api_server.go:279] https://192.168.61.65:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:58:58.999295 1102136 api_server.go:103] status: https://192.168.61.65:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:58:59.491062 1102136 api_server.go:253] Checking apiserver healthz at https://192.168.61.65:8443/healthz ...
	I0717 19:58:59.498441 1102136 api_server.go:279] https://192.168.61.65:8443/healthz returned 200:
	ok
	I0717 19:58:59.514314 1102136 api_server.go:141] control plane version: v1.27.3
	I0717 19:58:59.514353 1102136 api_server.go:131] duration metric: took 5.964587051s to wait for apiserver health ...
	I0717 19:58:59.514368 1102136 cni.go:84] Creating CNI manager for ""
	I0717 19:58:59.514403 1102136 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:58:59.516809 1102136 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:58:56.567585 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:56.568167 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:56.568203 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:56.568069 1103558 retry.go:31] will retry after 4.627498539s: waiting for machine to come up
	I0717 19:58:59.518908 1102136 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:58:59.549246 1102136 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 19:58:59.598652 1102136 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:58:59.614418 1102136 system_pods.go:59] 8 kube-system pods found
	I0717 19:58:59.614482 1102136 system_pods.go:61] "coredns-5d78c9869d-9mxdj" [fbff09fd-436d-4208-9187-b6312aa1c223] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 19:58:59.614506 1102136 system_pods.go:61] "etcd-no-preload-408472" [7125f19d-c1ed-4b1f-99be-207bbf5d8c70] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:58:59.614519 1102136 system_pods.go:61] "kube-apiserver-no-preload-408472" [0e54eaed-daad-434a-ad3b-96f7fb924099] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:58:59.614529 1102136 system_pods.go:61] "kube-controller-manager-no-preload-408472" [38ee8079-8142-45a9-8d5a-4abbc5c8bb3b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:58:59.614547 1102136 system_pods.go:61] "kube-proxy-cntdn" [8653567b-abf9-468c-a030-45fc53fa0cc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 19:58:59.614558 1102136 system_pods.go:61] "kube-scheduler-no-preload-408472" [e51560a1-c1b0-407c-8635-df512bd033b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:58:59.614575 1102136 system_pods.go:61] "metrics-server-74d5c6b9c-hnngh" [dfff837e-dbba-4795-935d-9562d2744169] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:58:59.614637 1102136 system_pods.go:61] "storage-provisioner" [1aefd8ef-dec9-4e37-8648-8e5a62622cd3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 19:58:59.614658 1102136 system_pods.go:74] duration metric: took 15.975122ms to wait for pod list to return data ...
	I0717 19:58:59.614669 1102136 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:58:59.621132 1102136 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 19:58:59.621181 1102136 node_conditions.go:123] node cpu capacity is 2
	I0717 19:58:59.621197 1102136 node_conditions.go:105] duration metric: took 6.519635ms to run NodePressure ...
	I0717 19:58:59.621224 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:59.909662 1102136 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 19:58:59.915153 1102136 kubeadm.go:787] kubelet initialised
	I0717 19:58:59.915190 1102136 kubeadm.go:788] duration metric: took 5.491139ms waiting for restarted kubelet to initialise ...
	I0717 19:58:59.915201 1102136 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:58:59.925196 1102136 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-9mxdj" in "kube-system" namespace to be "Ready" ...
	I0717 19:58:59.934681 1102136 pod_ready.go:97] node "no-preload-408472" hosting pod "coredns-5d78c9869d-9mxdj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:58:59.934715 1102136 pod_ready.go:81] duration metric: took 9.478384ms waiting for pod "coredns-5d78c9869d-9mxdj" in "kube-system" namespace to be "Ready" ...
	E0717 19:58:59.934728 1102136 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-408472" hosting pod "coredns-5d78c9869d-9mxdj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:58:59.934742 1102136 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:58:59.949704 1102136 pod_ready.go:97] node "no-preload-408472" hosting pod "etcd-no-preload-408472" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:58:59.949744 1102136 pod_ready.go:81] duration metric: took 14.992167ms waiting for pod "etcd-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	E0717 19:58:59.949757 1102136 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-408472" hosting pod "etcd-no-preload-408472" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:58:59.949766 1102136 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:58:59.958029 1102136 pod_ready.go:97] node "no-preload-408472" hosting pod "kube-apiserver-no-preload-408472" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:58:59.958083 1102136 pod_ready.go:81] duration metric: took 8.306713ms waiting for pod "kube-apiserver-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	E0717 19:58:59.958096 1102136 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-408472" hosting pod "kube-apiserver-no-preload-408472" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:58:59.958110 1102136 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:00.003638 1102136 pod_ready.go:97] node "no-preload-408472" hosting pod "kube-controller-manager-no-preload-408472" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:59:00.003689 1102136 pod_ready.go:81] duration metric: took 45.565817ms waiting for pod "kube-controller-manager-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:00.003702 1102136 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-408472" hosting pod "kube-controller-manager-no-preload-408472" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:59:00.003714 1102136 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cntdn" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:00.403384 1102136 pod_ready.go:97] node "no-preload-408472" hosting pod "kube-proxy-cntdn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:59:00.403421 1102136 pod_ready.go:81] duration metric: took 399.697327ms waiting for pod "kube-proxy-cntdn" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:00.403431 1102136 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-408472" hosting pod "kube-proxy-cntdn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:59:00.403440 1102136 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:00.803159 1102136 pod_ready.go:97] node "no-preload-408472" hosting pod "kube-scheduler-no-preload-408472" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:59:00.803192 1102136 pod_ready.go:81] duration metric: took 399.744356ms waiting for pod "kube-scheduler-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:00.803205 1102136 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-408472" hosting pod "kube-scheduler-no-preload-408472" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:59:00.803217 1102136 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:01.206222 1102136 pod_ready.go:97] node "no-preload-408472" hosting pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:59:01.206247 1102136 pod_ready.go:81] duration metric: took 403.0216ms waiting for pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:01.206256 1102136 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-408472" hosting pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:59:01.206271 1102136 pod_ready.go:38] duration metric: took 1.291054316s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:59:01.206293 1102136 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 19:59:01.225481 1102136 ops.go:34] apiserver oom_adj: -16
	I0717 19:59:01.225516 1102136 kubeadm.go:640] restartCluster took 21.895234291s
	I0717 19:59:01.225528 1102136 kubeadm.go:406] StartCluster complete in 21.948029137s
	I0717 19:59:01.225551 1102136 settings.go:142] acquiring lock: {Name:mk3323296c186763db6ebb5a1c5ae94a6b1a6242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:59:01.225672 1102136 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 19:59:01.228531 1102136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/kubeconfig: {Name:mk7f4fe48169c87be980c7edd9dbe55d4ea8b9ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:59:01.228913 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 19:59:01.229088 1102136 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 19:59:01.229192 1102136 config.go:182] Loaded profile config "no-preload-408472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:59:01.229244 1102136 addons.go:69] Setting metrics-server=true in profile "no-preload-408472"
	I0717 19:59:01.229249 1102136 addons.go:69] Setting default-storageclass=true in profile "no-preload-408472"
	I0717 19:59:01.229280 1102136 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-408472"
	I0717 19:59:01.229299 1102136 addons.go:231] Setting addon metrics-server=true in "no-preload-408472"
	W0717 19:59:01.229307 1102136 addons.go:240] addon metrics-server should already be in state true
	I0717 19:59:01.229241 1102136 addons.go:69] Setting storage-provisioner=true in profile "no-preload-408472"
	I0717 19:59:01.229353 1102136 addons.go:231] Setting addon storage-provisioner=true in "no-preload-408472"
	W0717 19:59:01.229366 1102136 addons.go:240] addon storage-provisioner should already be in state true
	I0717 19:59:01.229440 1102136 host.go:66] Checking if "no-preload-408472" exists ...
	I0717 19:59:01.229447 1102136 host.go:66] Checking if "no-preload-408472" exists ...
	I0717 19:59:01.229764 1102136 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:01.229804 1102136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:01.229833 1102136 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:01.229854 1102136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:01.229897 1102136 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:01.229943 1102136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:01.235540 1102136 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-408472" context rescaled to 1 replicas
	I0717 19:59:01.235641 1102136 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.65 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:59:01.239320 1102136 out.go:177] * Verifying Kubernetes components...
	I0717 19:59:01.241167 1102136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:59:01.247222 1102136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43199
	I0717 19:59:01.247751 1102136 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:01.248409 1102136 main.go:141] libmachine: Using API Version  1
	I0717 19:59:01.248438 1102136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:01.248825 1102136 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:01.249141 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetState
	I0717 19:59:01.249823 1102136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42025
	I0717 19:59:01.249829 1102136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34569
	I0717 19:59:01.250716 1102136 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:01.250724 1102136 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:01.251383 1102136 main.go:141] libmachine: Using API Version  1
	I0717 19:59:01.251409 1102136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:01.251591 1102136 main.go:141] libmachine: Using API Version  1
	I0717 19:59:01.251612 1102136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:01.252011 1102136 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:01.252078 1102136 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:01.252646 1102136 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:01.252679 1102136 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:01.252688 1102136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:01.252700 1102136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:01.270584 1102136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33003
	I0717 19:59:01.270664 1102136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40173
	I0717 19:59:01.271057 1102136 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:01.271170 1102136 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:01.271634 1102136 main.go:141] libmachine: Using API Version  1
	I0717 19:59:01.271656 1102136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:01.271782 1102136 main.go:141] libmachine: Using API Version  1
	I0717 19:59:01.271807 1102136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:01.272018 1102136 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:01.272158 1102136 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:01.272237 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetState
	I0717 19:59:01.272362 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetState
	I0717 19:59:01.274521 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	I0717 19:59:01.274525 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	I0717 19:59:01.277458 1102136 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 19:59:01.279611 1102136 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:59:02.603147 1101908 start.go:369] acquired machines lock for "old-k8s-version-149000" in 1m3.679538618s
	I0717 19:59:02.603207 1101908 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:59:02.603219 1101908 fix.go:54] fixHost starting: 
	I0717 19:59:02.603691 1101908 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:02.603736 1101908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:02.625522 1101908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46275
	I0717 19:59:02.626230 1101908 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:02.626836 1101908 main.go:141] libmachine: Using API Version  1
	I0717 19:59:02.626876 1101908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:02.627223 1101908 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:02.627395 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	I0717 19:59:02.627513 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetState
	I0717 19:59:02.629627 1101908 fix.go:102] recreateIfNeeded on old-k8s-version-149000: state=Stopped err=<nil>
	I0717 19:59:02.629669 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	W0717 19:59:02.629894 1101908 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 19:59:02.632584 1101908 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-149000" ...
	I0717 19:59:01.279643 1102136 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 19:59:01.281507 1102136 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:59:01.281513 1102136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 19:59:01.281520 1102136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 19:59:01.281545 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:59:01.281545 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:59:01.286403 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:59:01.286708 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:59:01.286766 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:59:01.286801 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:59:01.287001 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:59:01.287264 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:59:01.287523 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:59:01.287525 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:59:01.287606 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:59:01.287736 1102136 sshutil.go:53] new ssh client: &{IP:192.168.61.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/no-preload-408472/id_rsa Username:docker}
	I0717 19:59:01.287791 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:59:01.288610 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:59:01.288821 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:59:01.288982 1102136 sshutil.go:53] new ssh client: &{IP:192.168.61.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/no-preload-408472/id_rsa Username:docker}
	I0717 19:59:01.291242 1102136 addons.go:231] Setting addon default-storageclass=true in "no-preload-408472"
	W0717 19:59:01.291259 1102136 addons.go:240] addon default-storageclass should already be in state true
	I0717 19:59:01.291287 1102136 host.go:66] Checking if "no-preload-408472" exists ...
	I0717 19:59:01.291542 1102136 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:01.291569 1102136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:01.309690 1102136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43629
	I0717 19:59:01.310234 1102136 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:01.310915 1102136 main.go:141] libmachine: Using API Version  1
	I0717 19:59:01.310944 1102136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:01.311356 1102136 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:01.311903 1102136 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:01.311953 1102136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:01.350859 1102136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36839
	I0717 19:59:01.351342 1102136 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:01.351922 1102136 main.go:141] libmachine: Using API Version  1
	I0717 19:59:01.351950 1102136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:01.352334 1102136 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:01.352512 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetState
	I0717 19:59:01.354421 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	I0717 19:59:01.354815 1102136 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 19:59:01.354832 1102136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 19:59:01.354853 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:59:01.358180 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:59:01.358632 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:59:01.358651 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:59:01.358833 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:59:01.359049 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:59:01.359435 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:59:01.359582 1102136 sshutil.go:53] new ssh client: &{IP:192.168.61.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/no-preload-408472/id_rsa Username:docker}
	I0717 19:59:01.510575 1102136 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 19:59:01.510598 1102136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 19:59:01.534331 1102136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 19:59:01.545224 1102136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:59:01.582904 1102136 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 19:59:01.582945 1102136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 19:59:01.645312 1102136 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:59:01.645342 1102136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 19:59:01.715240 1102136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:59:01.746252 1102136 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0717 19:59:01.746249 1102136 node_ready.go:35] waiting up to 6m0s for node "no-preload-408472" to be "Ready" ...
	I0717 19:58:59.208473 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:59.241367 1102415 api_server.go:72] duration metric: took 2.549409381s to wait for apiserver process to appear ...
	I0717 19:58:59.241403 1102415 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:58:59.241432 1102415 api_server.go:253] Checking apiserver healthz at https://192.168.72.51:8444/healthz ...
	I0717 19:59:03.909722 1102415 api_server.go:279] https://192.168.72.51:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:59:03.909763 1102415 api_server.go:103] status: https://192.168.72.51:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:59:03.702857 1102136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.168474279s)
	I0717 19:59:03.702921 1102136 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:03.702938 1102136 main.go:141] libmachine: (no-preload-408472) Calling .Close
	I0717 19:59:03.703307 1102136 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:03.703331 1102136 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:03.703343 1102136 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:03.703353 1102136 main.go:141] libmachine: (no-preload-408472) Calling .Close
	I0717 19:59:03.703705 1102136 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:03.703735 1102136 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:03.703753 1102136 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:03.703766 1102136 main.go:141] libmachine: (no-preload-408472) Calling .Close
	I0717 19:59:03.705061 1102136 main.go:141] libmachine: (no-preload-408472) DBG | Closing plugin on server side
	I0717 19:59:03.705164 1102136 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:03.705187 1102136 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:03.793171 1102136 node_ready.go:58] node "no-preload-408472" has status "Ready":"False"
	I0717 19:59:04.294821 1102136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.749544143s)
	I0717 19:59:04.294904 1102136 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:04.294922 1102136 main.go:141] libmachine: (no-preload-408472) Calling .Close
	I0717 19:59:04.295362 1102136 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:04.295380 1102136 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:04.295391 1102136 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:04.295403 1102136 main.go:141] libmachine: (no-preload-408472) Calling .Close
	I0717 19:59:04.295470 1102136 main.go:141] libmachine: (no-preload-408472) DBG | Closing plugin on server side
	I0717 19:59:04.295674 1102136 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:04.295703 1102136 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:04.349340 1102136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.634046821s)
	I0717 19:59:04.349410 1102136 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:04.349428 1102136 main.go:141] libmachine: (no-preload-408472) Calling .Close
	I0717 19:59:04.349817 1102136 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:04.349837 1102136 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:04.349848 1102136 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:04.349858 1102136 main.go:141] libmachine: (no-preload-408472) Calling .Close
	I0717 19:59:04.349864 1102136 main.go:141] libmachine: (no-preload-408472) DBG | Closing plugin on server side
	I0717 19:59:04.350081 1102136 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:04.350097 1102136 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:04.350116 1102136 addons.go:467] Verifying addon metrics-server=true in "no-preload-408472"
	I0717 19:59:04.353040 1102136 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0717 19:59:01.198818 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.199367 1103141 main.go:141] libmachine: (embed-certs-114855) Found IP for machine: 192.168.39.213
	I0717 19:59:01.199394 1103141 main.go:141] libmachine: (embed-certs-114855) Reserving static IP address...
	I0717 19:59:01.199415 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has current primary IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.199879 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "embed-certs-114855", mac: "52:54:00:d6:57:9a", ip: "192.168.39.213"} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:01.199916 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | skip adding static IP to network mk-embed-certs-114855 - found existing host DHCP lease matching {name: "embed-certs-114855", mac: "52:54:00:d6:57:9a", ip: "192.168.39.213"}
	I0717 19:59:01.199934 1103141 main.go:141] libmachine: (embed-certs-114855) Reserved static IP address: 192.168.39.213
	I0717 19:59:01.199952 1103141 main.go:141] libmachine: (embed-certs-114855) Waiting for SSH to be available...
	I0717 19:59:01.199960 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | Getting to WaitForSSH function...
	I0717 19:59:01.202401 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.202876 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:01.202910 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.203075 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | Using SSH client type: external
	I0717 19:59:01.203121 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | Using SSH private key: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/embed-certs-114855/id_rsa (-rw-------)
	I0717 19:59:01.203172 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.213 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/embed-certs-114855/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:59:01.203195 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | About to run SSH command:
	I0717 19:59:01.203208 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | exit 0
	I0717 19:59:01.298366 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | SSH cmd err, output: <nil>: 
	I0717 19:59:01.298876 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetConfigRaw
	I0717 19:59:01.299753 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetIP
	I0717 19:59:01.303356 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.304237 1103141 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855/config.json ...
	I0717 19:59:01.304526 1103141 machine.go:88] provisioning docker machine ...
	I0717 19:59:01.304569 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 19:59:01.304668 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:01.304694 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.304847 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetMachineName
	I0717 19:59:01.305079 1103141 buildroot.go:166] provisioning hostname "embed-certs-114855"
	I0717 19:59:01.305103 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetMachineName
	I0717 19:59:01.305324 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 19:59:01.308214 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.308591 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:01.308630 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.308805 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 19:59:01.309016 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:01.309195 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:01.309346 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 19:59:01.309591 1103141 main.go:141] libmachine: Using SSH client type: native
	I0717 19:59:01.310205 1103141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0717 19:59:01.310227 1103141 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-114855 && echo "embed-certs-114855" | sudo tee /etc/hostname
	I0717 19:59:01.453113 1103141 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-114855
	
	I0717 19:59:01.453149 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 19:59:01.456502 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.456918 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:01.456981 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.457107 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 19:59:01.457291 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:01.457514 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:01.457711 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 19:59:01.457923 1103141 main.go:141] libmachine: Using SSH client type: native
	I0717 19:59:01.458567 1103141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0717 19:59:01.458597 1103141 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-114855' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-114855/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-114855' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:59:01.599062 1103141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:59:01.599112 1103141 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16890-1061725/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-1061725/.minikube}
	I0717 19:59:01.599143 1103141 buildroot.go:174] setting up certificates
	I0717 19:59:01.599161 1103141 provision.go:83] configureAuth start
	I0717 19:59:01.599194 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetMachineName
	I0717 19:59:01.599579 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetIP
	I0717 19:59:01.602649 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.603014 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:01.603050 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.603218 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 19:59:01.606042 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.606485 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:01.606531 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.606679 1103141 provision.go:138] copyHostCerts
	I0717 19:59:01.606754 1103141 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem, removing ...
	I0717 19:59:01.606767 1103141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem
	I0717 19:59:01.606839 1103141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem (1082 bytes)
	I0717 19:59:01.607009 1103141 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem, removing ...
	I0717 19:59:01.607025 1103141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem
	I0717 19:59:01.607061 1103141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem (1123 bytes)
	I0717 19:59:01.607158 1103141 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem, removing ...
	I0717 19:59:01.607174 1103141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem
	I0717 19:59:01.607204 1103141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem (1675 bytes)
	I0717 19:59:01.607298 1103141 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem org=jenkins.embed-certs-114855 san=[192.168.39.213 192.168.39.213 localhost 127.0.0.1 minikube embed-certs-114855]
	I0717 19:59:01.721082 1103141 provision.go:172] copyRemoteCerts
	I0717 19:59:01.721179 1103141 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:59:01.721223 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 19:59:01.724636 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.725093 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:01.725127 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.725418 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 19:59:01.725708 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:01.725896 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 19:59:01.726056 1103141 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/embed-certs-114855/id_rsa Username:docker}
	I0717 19:59:01.826710 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0717 19:59:01.861153 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 19:59:01.889779 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 19:59:01.919893 1103141 provision.go:86] duration metric: configureAuth took 320.712718ms
	I0717 19:59:01.919929 1103141 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:59:01.920192 1103141 config.go:182] Loaded profile config "embed-certs-114855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:59:01.920283 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 19:59:01.923585 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.926174 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:01.926264 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.926897 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 19:59:01.927167 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:01.927365 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:01.927512 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 19:59:01.927712 1103141 main.go:141] libmachine: Using SSH client type: native
	I0717 19:59:01.928326 1103141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0717 19:59:01.928350 1103141 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:59:02.302372 1103141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:59:02.302427 1103141 machine.go:91] provisioned docker machine in 997.853301ms
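The `printf %!s(MISSING)` in the logged command a few lines above is a format-verb artifact of the logger; the value actually piped into /etc/sysconfig/crio.minikube is the CRIO_MINIKUBE_OPTIONS line echoed back in the SSH output. A minimal Go sketch (not minikube's own code) of composing that same command, with the literal value taken from the output above:

package main

import "fmt"

// crioSysconfigCmd builds the shell command shown in the log: write the
// CRI-O options into /etc/sysconfig/crio.minikube and restart the service.
func crioSysconfigCmd(opts string) string {
	return fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "%s" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, opts)
}

func main() {
	// Literal value taken from the SSH output above.
	opts := "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	fmt.Println(crioSysconfigCmd(opts))
}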
	I0717 19:59:02.302441 1103141 start.go:300] post-start starting for "embed-certs-114855" (driver="kvm2")
	I0717 19:59:02.302455 1103141 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:59:02.302487 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 19:59:02.302859 1103141 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:59:02.302900 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 19:59:02.305978 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:02.306544 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:02.306626 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:02.306769 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 19:59:02.306996 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:02.307231 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 19:59:02.307403 1103141 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/embed-certs-114855/id_rsa Username:docker}
	I0717 19:59:02.408835 1103141 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:59:02.415119 1103141 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 19:59:02.415157 1103141 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/addons for local assets ...
	I0717 19:59:02.415256 1103141 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/files for local assets ...
	I0717 19:59:02.415444 1103141 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> 10689542.pem in /etc/ssl/certs
	I0717 19:59:02.415570 1103141 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:59:02.430800 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:59:02.465311 1103141 start.go:303] post-start completed in 162.851156ms
	I0717 19:59:02.465347 1103141 fix.go:56] fixHost completed within 24.594172049s
	I0717 19:59:02.465375 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 19:59:02.468945 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:02.469406 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:02.469443 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:02.469704 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 19:59:02.469972 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:02.470166 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:02.470301 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 19:59:02.470501 1103141 main.go:141] libmachine: Using SSH client type: native
	I0717 19:59:02.471120 1103141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0717 19:59:02.471159 1103141 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:59:02.602921 1103141 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689623942.546317761
	
	I0717 19:59:02.602957 1103141 fix.go:206] guest clock: 1689623942.546317761
	I0717 19:59:02.602970 1103141 fix.go:219] Guest: 2023-07-17 19:59:02.546317761 +0000 UTC Remote: 2023-07-17 19:59:02.465351333 +0000 UTC m=+106.772168927 (delta=80.966428ms)
	I0717 19:59:02.603036 1103141 fix.go:190] guest clock delta is within tolerance: 80.966428ms
	I0717 19:59:02.603053 1103141 start.go:83] releasing machines lock for "embed-certs-114855", held for 24.731922082s
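The clock fix-up above compares the guest's `date +%s.%N` output against the host-side timestamp and accepts the drift when it stays inside a tolerance. A small, hedged Go sketch of that comparison, using the two timestamps from the log; the one-second tolerance here is an assumption:

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest/host clock delta is acceptable.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Timestamps taken from the log lines above.
	guest := time.Unix(1689623942, 546317761)
	host := time.Unix(1689623942, 465351333)
	delta, ok := withinTolerance(guest, host, time.Second) // 1s tolerance is an assumption
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}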
	I0717 19:59:02.604022 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 19:59:02.604447 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetIP
	I0717 19:59:02.608397 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:02.608991 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:02.609030 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:02.609308 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 19:59:02.610102 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 19:59:02.610386 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 19:59:02.610634 1103141 ssh_runner.go:195] Run: cat /version.json
	I0717 19:59:02.610677 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 19:59:02.611009 1103141 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:59:02.611106 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 19:59:02.614739 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:02.615121 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:02.615253 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:02.616278 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:02.616386 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 19:59:02.616802 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:02.616829 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:02.617030 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 19:59:02.617096 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:02.617395 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:02.617442 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 19:59:02.617597 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 19:59:02.617826 1103141 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/embed-certs-114855/id_rsa Username:docker}
	I0717 19:59:02.618522 1103141 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/embed-certs-114855/id_rsa Username:docker}
	W0717 19:59:02.745192 1103141 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 19:59:02.745275 1103141 ssh_runner.go:195] Run: systemctl --version
	I0717 19:59:02.752196 1103141 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:59:02.903288 1103141 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:59:02.911818 1103141 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:59:02.911917 1103141 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:59:02.933786 1103141 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
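The `find` invocation above (its `-printf` verb garbled to `%!p(MISSING)` in the log) sidelines any bridge or podman CNI configs by renaming them with a `.mk_disabled` suffix, which is why `87-podman-bridge.conflist` is reported as disabled. A hedged Go sketch of the equivalent rename pass:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflicting renames bridge/podman CNI configs in dir by appending
// ".mk_disabled", mirroring the find/mv pipeline in the log above.
func disableConflicting(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableConflicting("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("disabled:", disabled)
}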
	I0717 19:59:02.933883 1103141 start.go:469] detecting cgroup driver to use...
	I0717 19:59:02.934004 1103141 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:59:02.955263 1103141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:59:02.974997 1103141 docker.go:196] disabling cri-docker service (if available) ...
	I0717 19:59:02.975077 1103141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:59:02.994203 1103141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:59:03.014446 1103141 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:59:03.198307 1103141 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:59:03.397392 1103141 docker.go:212] disabling docker service ...
	I0717 19:59:03.397591 1103141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:59:03.418509 1103141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:59:03.437373 1103141 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:59:03.613508 1103141 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:59:03.739647 1103141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:59:03.754406 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:59:03.777929 1103141 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:59:03.778091 1103141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:59:03.790606 1103141 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:59:03.790721 1103141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:59:03.804187 1103141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:59:03.817347 1103141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
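The three `sed` edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, and move conmon into the pod cgroup inside `/etc/crio/crio.conf.d/02-crio.conf`. A rough Go sketch of the same rewrite; the sample input below is an assumption, only the target values come from the log:

package main

import (
	"fmt"
	"regexp"
)

// applyCrioOverrides mirrors the sed edits from the log: pin the pause image,
// force the cgroupfs cgroup manager, and run conmon in the pod cgroup.
func applyCrioOverrides(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile("(?m)^.*conmon_cgroup = .*\n").
		ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	return conf
}

func main() {
	// Sample drop-in content; the real 02-crio.conf is not shown in the log.
	sample := "[crio.runtime]\npause_image = \"registry.k8s.io/pause:3.6\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Print(applyCrioOverrides(sample))
}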
	I0717 19:59:03.828813 1103141 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:59:03.840430 1103141 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:59:03.850240 1103141 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:59:03.850319 1103141 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:59:03.865894 1103141 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:59:03.882258 1103141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:59:04.017800 1103141 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:59:04.248761 1103141 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:59:04.248842 1103141 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:59:04.257893 1103141 start.go:537] Will wait 60s for crictl version
	I0717 19:59:04.257984 1103141 ssh_runner.go:195] Run: which crictl
	I0717 19:59:04.264221 1103141 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:59:04.305766 1103141 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 19:59:04.305851 1103141 ssh_runner.go:195] Run: crio --version
	I0717 19:59:04.375479 1103141 ssh_runner.go:195] Run: crio --version
	I0717 19:59:04.436461 1103141 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0717 19:59:04.438378 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetIP
	I0717 19:59:04.442194 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:04.442754 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:04.442792 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:04.443221 1103141 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 19:59:04.448534 1103141 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
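The bash pipeline above makes the `host.minikube.internal` entry idempotent: any stale line is filtered out of `/etc/hosts` before the gateway address is appended. A hedged Go sketch of the same upsert:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostEntry rewrites hosts content so it contains exactly one
// "host.minikube.internal" line pointing at ip, mirroring the grep/echo
// pipeline in the log above.
func upsertHostEntry(hosts, ip string) string {
	var out []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // drop any stale entry
		}
		if line != "" {
			out = append(out, line)
		}
	}
	out = append(out, ip+"\thost.minikube.internal")
	return strings.Join(out, "\n") + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Print(upsertHostEntry(string(data), "192.168.39.1"))
}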
	I0717 19:59:04.465868 1103141 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 19:59:04.465946 1103141 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:59:04.502130 1103141 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0717 19:59:04.502219 1103141 ssh_runner.go:195] Run: which lz4
	I0717 19:59:04.507394 1103141 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 19:59:04.512404 1103141 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:59:04.512452 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0717 19:59:04.409929 1102415 api_server.go:253] Checking apiserver healthz at https://192.168.72.51:8444/healthz ...
	I0717 19:59:04.419102 1102415 api_server.go:279] https://192.168.72.51:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:59:04.419138 1102415 api_server.go:103] status: https://192.168.72.51:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:59:04.910761 1102415 api_server.go:253] Checking apiserver healthz at https://192.168.72.51:8444/healthz ...
	I0717 19:59:04.919844 1102415 api_server.go:279] https://192.168.72.51:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:59:04.919898 1102415 api_server.go:103] status: https://192.168.72.51:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:59:05.410298 1102415 api_server.go:253] Checking apiserver healthz at https://192.168.72.51:8444/healthz ...
	I0717 19:59:05.424961 1102415 api_server.go:279] https://192.168.72.51:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:59:05.425002 1102415 api_server.go:103] status: https://192.168.72.51:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:59:05.910377 1102415 api_server.go:253] Checking apiserver healthz at https://192.168.72.51:8444/healthz ...
	I0717 19:59:05.924698 1102415 api_server.go:279] https://192.168.72.51:8444/healthz returned 200:
	ok
	I0717 19:59:05.949272 1102415 api_server.go:141] control plane version: v1.27.3
	I0717 19:59:05.949308 1102415 api_server.go:131] duration metric: took 6.707896837s to wait for apiserver health ...
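The repeated 500s above are the apiserver's verbose /healthz output while the `rbac/bootstrap-roles` and priority-class post-start hooks finish; the wait loop simply re-polls roughly every half second until the endpoint returns a plain 200 `ok`. A hedged Go sketch of such a poll (the insecure TLS transport and interval are assumptions; the URL is the one in the log):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns 200 or
// the deadline passes, mirroring the retry loop visible in the log above.
func waitHealthz(url string, interval, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a self-signed cert here, so this sketch skips
		// verification; minikube's real client trusts the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitHealthz("https://192.168.72.51:8444/healthz", 500*time.Millisecond, time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthy")
}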
	I0717 19:59:05.949321 1102415 cni.go:84] Creating CNI manager for ""
	I0717 19:59:05.949334 1102415 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:59:05.952250 1102415 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:59:02.634580 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .Start
	I0717 19:59:02.635005 1101908 main.go:141] libmachine: (old-k8s-version-149000) Ensuring networks are active...
	I0717 19:59:02.635919 1101908 main.go:141] libmachine: (old-k8s-version-149000) Ensuring network default is active
	I0717 19:59:02.636328 1101908 main.go:141] libmachine: (old-k8s-version-149000) Ensuring network mk-old-k8s-version-149000 is active
	I0717 19:59:02.637168 1101908 main.go:141] libmachine: (old-k8s-version-149000) Getting domain xml...
	I0717 19:59:02.638177 1101908 main.go:141] libmachine: (old-k8s-version-149000) Creating domain...
	I0717 19:59:04.249328 1101908 main.go:141] libmachine: (old-k8s-version-149000) Waiting to get IP...
	I0717 19:59:04.250286 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:04.250925 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:04.251047 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:04.250909 1103733 retry.go:31] will retry after 305.194032ms: waiting for machine to come up
	I0717 19:59:04.558456 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:04.559354 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:04.559387 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:04.559290 1103733 retry.go:31] will retry after 338.882261ms: waiting for machine to come up
	I0717 19:59:04.900152 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:04.900673 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:04.900700 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:04.900616 1103733 retry.go:31] will retry after 334.664525ms: waiting for machine to come up
	I0717 19:59:05.236557 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:05.237252 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:05.237280 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:05.237121 1103733 retry.go:31] will retry after 410.314805ms: waiting for machine to come up
	I0717 19:59:05.648936 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:05.649630 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:05.649665 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:05.649572 1103733 retry.go:31] will retry after 482.724985ms: waiting for machine to come up
	I0717 19:59:06.135159 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:06.135923 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:06.135961 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:06.135851 1103733 retry.go:31] will retry after 646.078047ms: waiting for machine to come up
	I0717 19:59:06.783788 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:06.784327 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:06.784352 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:06.784239 1103733 retry.go:31] will retry after 1.176519187s: waiting for machine to come up
	I0717 19:59:05.954319 1102415 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:59:06.005185 1102415 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
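The 457-byte `/etc/cni/net.d/1-k8s.conflist` written above is the bridge CNI config announced by the "Configuring bridge CNI" step. Its exact contents are not in the log, so the Go snippet below only illustrates the usual bridge-plus-portmap shape of such a conflist; every field value is an assumption:

package main

import (
	"encoding/json"
	"fmt"
)

// An illustrative bridge CNI conflist of the kind dropped into
// /etc/cni/net.d; field values here are assumptions, not the file from the log.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	var cfg map[string]any
	if err := json.Unmarshal([]byte(bridgeConflist), &cfg); err != nil {
		panic(err) // the sample above is valid JSON
	}
	fmt.Printf("conflist %q with %d plugins\n", cfg["name"], len(cfg["plugins"].([]any)))
}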
	I0717 19:59:06.070856 1102415 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:59:06.086358 1102415 system_pods.go:59] 8 kube-system pods found
	I0717 19:59:06.086429 1102415 system_pods.go:61] "coredns-5d78c9869d-rjqsv" [f27e2de9-9849-40e2-b6dc-1ee27537b1e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 19:59:06.086448 1102415 system_pods.go:61] "etcd-default-k8s-diff-port-711413" [15f74e3f-61a1-4464-bbff-d336f6df4b6e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:59:06.086462 1102415 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-711413" [c164ab32-6a1d-4079-9b58-7da96eabc60e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:59:06.086481 1102415 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-711413" [7dc149c8-be03-4fd6-b945-c63e95a28470] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:59:06.086498 1102415 system_pods.go:61] "kube-proxy-9qfpg" [ecb84bb9-57a2-4a42-8104-b792d38479ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 19:59:06.086513 1102415 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-711413" [f2cb374f-674e-4a08-82e0-0932b732b485] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:59:06.086526 1102415 system_pods.go:61] "metrics-server-74d5c6b9c-hzcd7" [17e01399-9910-4f01-abe7-3eae271af1db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:59:06.086536 1102415 system_pods.go:61] "storage-provisioner" [43705714-97f1-4b06-8eeb-04d60c22112a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 19:59:06.086546 1102415 system_pods.go:74] duration metric: took 15.663084ms to wait for pod list to return data ...
	I0717 19:59:06.086556 1102415 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:59:06.113146 1102415 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 19:59:06.113186 1102415 node_conditions.go:123] node cpu capacity is 2
	I0717 19:59:06.113203 1102415 node_conditions.go:105] duration metric: took 26.64051ms to run NodePressure ...
	I0717 19:59:06.113228 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:06.757768 1102415 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 19:59:06.770030 1102415 kubeadm.go:787] kubelet initialised
	I0717 19:59:06.770064 1102415 kubeadm.go:788] duration metric: took 12.262867ms waiting for restarted kubelet to initialise ...
	I0717 19:59:06.770077 1102415 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:59:06.782569 1102415 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-rjqsv" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:06.794688 1102415 pod_ready.go:97] node "default-k8s-diff-port-711413" hosting pod "coredns-5d78c9869d-rjqsv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:06.794714 1102415 pod_ready.go:81] duration metric: took 12.110858ms waiting for pod "coredns-5d78c9869d-rjqsv" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:06.794723 1102415 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-711413" hosting pod "coredns-5d78c9869d-rjqsv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:06.794732 1102415 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:06.812213 1102415 pod_ready.go:97] node "default-k8s-diff-port-711413" hosting pod "etcd-default-k8s-diff-port-711413" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:06.812265 1102415 pod_ready.go:81] duration metric: took 17.522572ms waiting for pod "etcd-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:06.812281 1102415 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-711413" hosting pod "etcd-default-k8s-diff-port-711413" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:06.812291 1102415 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:06.838241 1102415 pod_ready.go:97] node "default-k8s-diff-port-711413" hosting pod "kube-apiserver-default-k8s-diff-port-711413" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:06.838291 1102415 pod_ready.go:81] duration metric: took 25.986333ms waiting for pod "kube-apiserver-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:06.838306 1102415 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-711413" hosting pod "kube-apiserver-default-k8s-diff-port-711413" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:06.838318 1102415 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:06.869011 1102415 pod_ready.go:97] node "default-k8s-diff-port-711413" hosting pod "kube-controller-manager-default-k8s-diff-port-711413" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:06.869127 1102415 pod_ready.go:81] duration metric: took 30.791681ms waiting for pod "kube-controller-manager-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:06.869155 1102415 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-711413" hosting pod "kube-controller-manager-default-k8s-diff-port-711413" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:06.869192 1102415 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9qfpg" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:07.164422 1102415 pod_ready.go:97] node "default-k8s-diff-port-711413" hosting pod "kube-proxy-9qfpg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:07.164521 1102415 pod_ready.go:81] duration metric: took 295.308967ms waiting for pod "kube-proxy-9qfpg" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:07.164549 1102415 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-711413" hosting pod "kube-proxy-9qfpg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:07.164570 1102415 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:07.571331 1102415 pod_ready.go:97] node "default-k8s-diff-port-711413" hosting pod "kube-scheduler-default-k8s-diff-port-711413" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:07.571370 1102415 pod_ready.go:81] duration metric: took 406.779012ms waiting for pod "kube-scheduler-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:07.571383 1102415 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-711413" hosting pod "kube-scheduler-default-k8s-diff-port-711413" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:07.571393 1102415 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:07.967699 1102415 pod_ready.go:97] node "default-k8s-diff-port-711413" hosting pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:07.967740 1102415 pod_ready.go:81] duration metric: took 396.334567ms waiting for pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:07.967757 1102415 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-711413" hosting pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:07.967770 1102415 pod_ready.go:38] duration metric: took 1.197678353s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:59:07.967793 1102415 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 19:59:08.014470 1102415 ops.go:34] apiserver oom_adj: -16
	I0717 19:59:08.014500 1102415 kubeadm.go:640] restartCluster took 22.633851106s
	I0717 19:59:08.014510 1102415 kubeadm.go:406] StartCluster complete in 22.683627985s
	I0717 19:59:08.014534 1102415 settings.go:142] acquiring lock: {Name:mk3323296c186763db6ebb5a1c5ae94a6b1a6242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:59:08.014622 1102415 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 19:59:08.017393 1102415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/kubeconfig: {Name:mk7f4fe48169c87be980c7edd9dbe55d4ea8b9ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:59:08.018018 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 19:59:08.018126 1102415 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 19:59:08.018273 1102415 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-711413"
	I0717 19:59:08.018300 1102415 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-711413"
	W0717 19:59:08.018309 1102415 addons.go:240] addon storage-provisioner should already be in state true
	I0717 19:59:08.018404 1102415 host.go:66] Checking if "default-k8s-diff-port-711413" exists ...
	I0717 19:59:08.018400 1102415 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-711413"
	I0717 19:59:08.018457 1102415 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-711413"
	W0717 19:59:08.018471 1102415 addons.go:240] addon metrics-server should already be in state true
	I0717 19:59:08.018538 1102415 host.go:66] Checking if "default-k8s-diff-port-711413" exists ...
	I0717 19:59:08.018864 1102415 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:08.018916 1102415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:08.018950 1102415 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:08.018997 1102415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:08.019087 1102415 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-711413"
	I0717 19:59:08.019108 1102415 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-711413"
	I0717 19:59:08.019378 1102415 config.go:182] Loaded profile config "default-k8s-diff-port-711413": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:59:08.019724 1102415 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:08.019823 1102415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:08.028311 1102415 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-711413" context rescaled to 1 replicas
	I0717 19:59:08.028363 1102415 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.51 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:59:08.031275 1102415 out.go:177] * Verifying Kubernetes components...
	I0717 19:59:08.033186 1102415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:59:08.041793 1102415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43183
	I0717 19:59:08.041831 1102415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36281
	I0717 19:59:08.042056 1102415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43569
	I0717 19:59:08.042525 1102415 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:08.042709 1102415 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:08.043195 1102415 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:08.043373 1102415 main.go:141] libmachine: Using API Version  1
	I0717 19:59:08.043382 1102415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:08.043479 1102415 main.go:141] libmachine: Using API Version  1
	I0717 19:59:08.043486 1102415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:08.043911 1102415 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:08.044078 1102415 main.go:141] libmachine: Using API Version  1
	I0717 19:59:08.044095 1102415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:08.044514 1102415 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:08.044542 1102415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:08.044773 1102415 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:08.044878 1102415 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:08.045003 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetState
	I0717 19:59:08.045373 1102415 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:08.045401 1102415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:08.065715 1102415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45741
	I0717 19:59:08.066371 1102415 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:08.067102 1102415 main.go:141] libmachine: Using API Version  1
	I0717 19:59:08.067128 1102415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:08.067662 1102415 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:08.067824 1102415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37537
	I0717 19:59:08.068091 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetState
	I0717 19:59:08.069488 1102415 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:08.070144 1102415 main.go:141] libmachine: Using API Version  1
	I0717 19:59:08.070163 1102415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:08.070232 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	I0717 19:59:08.070672 1102415 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:08.070852 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetState
	I0717 19:59:08.072648 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	I0717 19:59:08.075752 1102415 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 19:59:08.077844 1102415 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:59:04.355036 1102136 addons.go:502] enable addons completed in 3.125961318s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0717 19:59:06.268158 1102136 node_ready.go:58] node "no-preload-408472" has status "Ready":"False"
	I0717 19:59:08.079803 1102415 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:59:08.079826 1102415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 19:59:08.079857 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:59:08.077802 1102415 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 19:59:08.079941 1102415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 19:59:08.079958 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:59:08.078604 1102415 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-711413"
	W0717 19:59:08.080010 1102415 addons.go:240] addon default-storageclass should already be in state true
	I0717 19:59:08.080048 1102415 host.go:66] Checking if "default-k8s-diff-port-711413" exists ...
	I0717 19:59:08.080446 1102415 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:08.080498 1102415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:08.084746 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:59:08.084836 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:59:08.085468 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:59:08.085502 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:59:08.085512 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:59:08.085534 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:59:08.085599 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:59:08.085738 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:59:08.085851 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:59:08.085998 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:59:08.086028 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:59:08.086182 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:59:08.086221 1102415 sshutil.go:53] new ssh client: &{IP:192.168.72.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/default-k8s-diff-port-711413/id_rsa Username:docker}
	I0717 19:59:08.086298 1102415 sshutil.go:53] new ssh client: &{IP:192.168.72.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/default-k8s-diff-port-711413/id_rsa Username:docker}
	I0717 19:59:08.103113 1102415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41455
	I0717 19:59:08.103751 1102415 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:08.104389 1102415 main.go:141] libmachine: Using API Version  1
	I0717 19:59:08.104412 1102415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:08.104985 1102415 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:08.105805 1102415 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:08.105846 1102415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:08.127906 1102415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34411
	I0717 19:59:08.129757 1102415 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:08.130713 1102415 main.go:141] libmachine: Using API Version  1
	I0717 19:59:08.130734 1102415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:08.131175 1102415 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:08.133060 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetState
	I0717 19:59:08.135496 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	I0717 19:59:08.135824 1102415 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 19:59:08.135840 1102415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 19:59:08.135860 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:59:08.139031 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:59:08.139443 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:59:08.139480 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:59:08.139855 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:59:08.140455 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:59:08.140850 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:59:08.141145 1102415 sshutil.go:53] new ssh client: &{IP:192.168.72.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/default-k8s-diff-port-711413/id_rsa Username:docker}
	I0717 19:59:08.260742 1102415 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 19:59:08.260779 1102415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 19:59:08.310084 1102415 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 19:59:08.310123 1102415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 19:59:08.315228 1102415 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:59:08.333112 1102415 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 19:59:08.347265 1102415 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:59:08.347297 1102415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 19:59:08.446018 1102415 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
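Addon installation in these logs follows a single pattern: each manifest is scp'd onto the node under /etc/kubernetes/addons/ and then applied with the kubectl binary staged under /var/lib/minikube/binaries, pointed at the node's own kubeconfig. A condensed, hand-runnable form of the metrics-server apply above (paths exactly as they appear in this log):
	k=/var/lib/minikube/binaries/v1.27.3/kubectl
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig "$k" apply \
	  -f /etc/kubernetes/addons/metrics-apiservice.yaml \
	  -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
	  -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
	  -f /etc/kubernetes/addons/metrics-server-service.yaml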
	I0717 19:59:08.602418 1102415 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0717 19:59:08.602481 1102415 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-711413" to be "Ready" ...
	I0717 19:59:06.789410 1103141 crio.go:444] Took 2.282067 seconds to copy over tarball
	I0717 19:59:06.789500 1103141 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 19:59:10.614919 1103141 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.825382729s)
	I0717 19:59:10.614956 1103141 crio.go:451] Took 3.825512 seconds to extract the tarball
	I0717 19:59:10.614970 1103141 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 19:59:10.668773 1103141 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:59:10.721815 1103141 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 19:59:10.721849 1103141 cache_images.go:84] Images are preloaded, skipping loading
	I0717 19:59:10.721928 1103141 ssh_runner.go:195] Run: crio config
	I0717 19:59:10.626470 1102415 node_ready.go:58] node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:11.522603 1102415 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.189445026s)
	I0717 19:59:11.522668 1102415 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:11.522681 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .Close
	I0717 19:59:11.522703 1102415 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.207433491s)
	I0717 19:59:11.522747 1102415 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:11.522762 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .Close
	I0717 19:59:11.523183 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | Closing plugin on server side
	I0717 19:59:11.523208 1102415 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:11.523223 1102415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:11.523234 1102415 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:11.523247 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .Close
	I0717 19:59:11.523700 1102415 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:11.523717 1102415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:11.523768 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | Closing plugin on server side
	I0717 19:59:11.525232 1102415 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:11.525259 1102415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:11.525269 1102415 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:11.525278 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .Close
	I0717 19:59:11.526823 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | Closing plugin on server side
	I0717 19:59:11.526841 1102415 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:11.526864 1102415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:11.526878 1102415 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:11.526889 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .Close
	I0717 19:59:11.527158 1102415 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:11.527174 1102415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:11.527190 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | Closing plugin on server side
	I0717 19:59:11.546758 1102415 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.100689574s)
	I0717 19:59:11.546840 1102415 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:11.546856 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .Close
	I0717 19:59:11.548817 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | Closing plugin on server side
	I0717 19:59:11.548900 1102415 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:11.548920 1102415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:11.548946 1102415 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:11.548966 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .Close
	I0717 19:59:11.549341 1102415 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:11.549360 1102415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:11.549374 1102415 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-711413"
	I0717 19:59:11.549385 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | Closing plugin on server side
	I0717 19:59:11.629748 1102415 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 19:59:07.962879 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:07.963461 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:07.963494 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:07.963408 1103733 retry.go:31] will retry after 1.458776494s: waiting for machine to come up
	I0717 19:59:09.423815 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:09.424545 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:09.424578 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:09.424434 1103733 retry.go:31] will retry after 1.505416741s: waiting for machine to come up
	I0717 19:59:10.932450 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:10.932970 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:10.932999 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:10.932907 1103733 retry.go:31] will retry after 2.119238731s: waiting for machine to come up
	I0717 19:59:08.762965 1102136 node_ready.go:49] node "no-preload-408472" has status "Ready":"True"
	I0717 19:59:08.762999 1102136 node_ready.go:38] duration metric: took 7.016711148s waiting for node "no-preload-408472" to be "Ready" ...
	I0717 19:59:08.763010 1102136 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:59:08.770929 1102136 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-9mxdj" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:08.781876 1102136 pod_ready.go:92] pod "coredns-5d78c9869d-9mxdj" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:08.781916 1102136 pod_ready.go:81] duration metric: took 10.948677ms waiting for pod "coredns-5d78c9869d-9mxdj" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:08.781931 1102136 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:08.790806 1102136 pod_ready.go:92] pod "etcd-no-preload-408472" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:08.790842 1102136 pod_ready.go:81] duration metric: took 8.902354ms waiting for pod "etcd-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:08.790858 1102136 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:11.107348 1102136 pod_ready.go:102] pod "kube-apiserver-no-preload-408472" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:12.306923 1102136 pod_ready.go:92] pod "kube-apiserver-no-preload-408472" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:12.306956 1102136 pod_ready.go:81] duration metric: took 3.516087536s waiting for pod "kube-apiserver-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:12.306971 1102136 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:12.314504 1102136 pod_ready.go:92] pod "kube-controller-manager-no-preload-408472" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:12.314541 1102136 pod_ready.go:81] duration metric: took 7.560269ms waiting for pod "kube-controller-manager-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:12.314557 1102136 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cntdn" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:12.323200 1102136 pod_ready.go:92] pod "kube-proxy-cntdn" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:12.323232 1102136 pod_ready.go:81] duration metric: took 8.667115ms waiting for pod "kube-proxy-cntdn" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:12.323246 1102136 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:12.367453 1102136 pod_ready.go:92] pod "kube-scheduler-no-preload-408472" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:12.367483 1102136 pod_ready.go:81] duration metric: took 44.229894ms waiting for pod "kube-scheduler-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:12.367494 1102136 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:11.776332 1102415 addons.go:502] enable addons completed in 3.758222459s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 19:59:13.118285 1102415 node_ready.go:58] node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:10.806964 1103141 cni.go:84] Creating CNI manager for ""
	I0717 19:59:10.907820 1103141 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:59:10.908604 1103141 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 19:59:10.908671 1103141 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.213 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-114855 NodeName:embed-certs-114855 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:59:10.909456 1103141 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.213
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-114855"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:59:10.909661 1103141 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-114855 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:embed-certs-114855 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
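The kubelet drop-in above intentionally contains an empty ExecStart= line before the real one: in a systemd drop-in, the empty assignment clears the ExecStart inherited from the packaged unit so that only minikube's command line remains. To verify which command line actually takes effect after such a change, something along these lines would do (unit name assumed to be kubelet.service):
	systemctl cat kubelet.service                 # the base unit plus every drop-in, in load order
	systemctl show kubelet.service -p ExecStart   # the effective command systemd will run
	sudo systemctl daemon-reload && sudo systemctl restart kubelet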
	I0717 19:59:10.909757 1103141 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 19:59:10.933995 1103141 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:59:10.934116 1103141 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:59:10.949424 1103141 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0717 19:59:10.971981 1103141 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:59:10.995942 1103141 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0717 19:59:11.021147 1103141 ssh_runner.go:195] Run: grep 192.168.39.213	control-plane.minikube.internal$ /etc/hosts
	I0717 19:59:11.027824 1103141 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.213	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
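The /etc/hosts rewrite above is idempotent: any existing control-plane.minikube.internal entry is filtered out and a single fresh line is written back, so repeated restarts never accumulate duplicates. A stand-alone sketch of the same pattern, with the IP and name taken from this log:
	ip=192.168.39.213
	name=control-plane.minikube.internal
	# drop any old entry for the name, append one authoritative line, install via a temp file
	{ grep -v "${name}" /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/hosts.$$ \
	  && sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$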
	I0717 19:59:11.046452 1103141 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855 for IP: 192.168.39.213
	I0717 19:59:11.046507 1103141 certs.go:190] acquiring lock for shared ca certs: {Name:mkdfbc203d7bd874aff2f1c4e5c3352251804e32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:59:11.046722 1103141 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key
	I0717 19:59:11.046792 1103141 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key
	I0717 19:59:11.046890 1103141 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855/client.key
	I0717 19:59:11.046974 1103141 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855/apiserver.key.af9d86f2
	I0717 19:59:11.047032 1103141 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855/proxy-client.key
	I0717 19:59:11.047198 1103141 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem (1338 bytes)
	W0717 19:59:11.047246 1103141 certs.go:433] ignoring /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954_empty.pem, impossibly tiny 0 bytes
	I0717 19:59:11.047262 1103141 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:59:11.047297 1103141 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem (1082 bytes)
	I0717 19:59:11.047330 1103141 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:59:11.047360 1103141 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem (1675 bytes)
	I0717 19:59:11.047422 1103141 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:59:11.048308 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 19:59:11.076826 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 19:59:11.116981 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:59:11.152433 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 19:59:11.186124 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:59:11.219052 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 19:59:11.251034 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:59:11.281026 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 19:59:11.314219 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:59:11.341636 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem --> /usr/share/ca-certificates/1068954.pem (1338 bytes)
	I0717 19:59:11.372920 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /usr/share/ca-certificates/10689542.pem (1708 bytes)
	I0717 19:59:11.403343 1103141 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:59:11.428094 1103141 ssh_runner.go:195] Run: openssl version
	I0717 19:59:11.435909 1103141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:59:11.455770 1103141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:59:11.463749 1103141 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:59:11.463851 1103141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:59:11.473784 1103141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:59:11.490867 1103141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1068954.pem && ln -fs /usr/share/ca-certificates/1068954.pem /etc/ssl/certs/1068954.pem"
	I0717 19:59:11.507494 1103141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1068954.pem
	I0717 19:59:11.514644 1103141 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:52 /usr/share/ca-certificates/1068954.pem
	I0717 19:59:11.514746 1103141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1068954.pem
	I0717 19:59:11.523975 1103141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1068954.pem /etc/ssl/certs/51391683.0"
	I0717 19:59:11.539528 1103141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10689542.pem && ln -fs /usr/share/ca-certificates/10689542.pem /etc/ssl/certs/10689542.pem"
	I0717 19:59:11.552649 1103141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10689542.pem
	I0717 19:59:11.559671 1103141 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:52 /usr/share/ca-certificates/10689542.pem
	I0717 19:59:11.559757 1103141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10689542.pem
	I0717 19:59:11.569190 1103141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10689542.pem /etc/ssl/certs/3ec20f2e.0"
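The three blocks above repeat the standard OpenSSL trust-store recipe: place the certificate under /usr/share/ca-certificates, compute its subject hash, and symlink it into /etc/ssl/certs as <hash>.0, which is the name OpenSSL resolves at verification time. Generalised (the cert path is a placeholder for any of the certs copied above):
	cert=/usr/share/ca-certificates/minikubeCA.pem
	sudo test -s "$cert"                            # present and non-empty
	hash=$(openssl x509 -hash -noout -in "$cert")   # e.g. b5213941, as seen in the symlink above
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"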
	I0717 19:59:11.584473 1103141 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 19:59:11.590453 1103141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:59:11.599427 1103141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:59:11.607503 1103141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:59:11.619641 1103141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:59:11.627914 1103141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:59:11.636600 1103141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
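Each -checkend 86400 call above asks OpenSSL whether the certificate will still be valid 24 hours (86,400 seconds) from now; a non-zero exit marks it for regeneration instead of reuse. The same check swept over the minikube cert directories might look like:
	for crt in /var/lib/minikube/certs/*.crt /var/lib/minikube/certs/etcd/*.crt; do
	  sudo openssl x509 -noout -in "$crt" -checkend 86400 >/dev/null \
	    || echo "expiring within 24h: $crt"
	done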
	I0717 19:59:11.645829 1103141 kubeadm.go:404] StartCluster: {Name:embed-certs-114855 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-114855 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:59:11.645960 1103141 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:59:11.646049 1103141 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:59:11.704959 1103141 cri.go:89] found id: ""
	I0717 19:59:11.705078 1103141 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:59:11.720588 1103141 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 19:59:11.720621 1103141 kubeadm.go:636] restartCluster start
	I0717 19:59:11.720697 1103141 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:59:11.734693 1103141 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:11.736236 1103141 kubeconfig.go:92] found "embed-certs-114855" server: "https://192.168.39.213:8443"
	I0717 19:59:11.739060 1103141 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:59:11.752975 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:11.753096 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:11.766287 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:12.266751 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:12.266867 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:12.281077 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:12.766565 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:12.766669 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:12.780460 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:13.267185 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:13.267305 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:13.286250 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:13.766474 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:13.766582 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:13.780973 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:14.266474 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:14.266565 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:14.283412 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:14.766783 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:14.766885 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:14.782291 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:15.266607 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:15.266721 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:15.279993 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:13.054320 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:13.054787 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:13.054821 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:13.054724 1103733 retry.go:31] will retry after 2.539531721s: waiting for machine to come up
	I0717 19:59:15.597641 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:15.598199 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:15.598235 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:15.598132 1103733 retry.go:31] will retry after 3.376944775s: waiting for machine to come up
	I0717 19:59:14.773506 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:16.778529 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:14.611538 1102415 node_ready.go:49] node "default-k8s-diff-port-711413" has status "Ready":"True"
	I0717 19:59:14.611573 1102415 node_ready.go:38] duration metric: took 6.009046151s waiting for node "default-k8s-diff-port-711413" to be "Ready" ...
	I0717 19:59:14.611583 1102415 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:59:14.620522 1102415 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-rjqsv" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:14.629345 1102415 pod_ready.go:92] pod "coredns-5d78c9869d-rjqsv" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:14.629380 1102415 pod_ready.go:81] duration metric: took 8.831579ms waiting for pod "coredns-5d78c9869d-rjqsv" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:14.629394 1102415 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:14.636756 1102415 pod_ready.go:92] pod "etcd-default-k8s-diff-port-711413" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:14.636781 1102415 pod_ready.go:81] duration metric: took 7.379506ms waiting for pod "etcd-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:14.636791 1102415 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:16.658668 1102415 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-711413" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:16.658699 1102415 pod_ready.go:81] duration metric: took 2.021899463s waiting for pod "kube-apiserver-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:16.658715 1102415 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:16.667666 1102415 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-711413" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:16.667695 1102415 pod_ready.go:81] duration metric: took 8.971091ms waiting for pod "kube-controller-manager-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:16.667709 1102415 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9qfpg" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:16.677402 1102415 pod_ready.go:92] pod "kube-proxy-9qfpg" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:16.677433 1102415 pod_ready.go:81] duration metric: took 9.71529ms waiting for pod "kube-proxy-9qfpg" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:16.677448 1102415 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:17.011304 1102415 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-711413" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:17.011332 1102415 pod_ready.go:81] duration metric: took 333.876392ms waiting for pod "kube-scheduler-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:17.011344 1102415 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:15.766793 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:15.766913 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:15.780587 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:16.266363 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:16.266491 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:16.281228 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:16.766575 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:16.766690 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:16.782127 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:17.266511 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:17.266610 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:17.282119 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:17.766652 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:17.766758 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:17.783972 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:18.266759 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:18.266855 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:18.284378 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:18.766574 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:18.766675 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:18.782934 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:19.266475 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:19.266577 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:19.280895 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:19.767307 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:19.767411 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:19.781007 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:20.266522 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:20.266646 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:20.280722 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:18.976814 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:18.977300 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:18.977326 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:18.977254 1103733 retry.go:31] will retry after 2.728703676s: waiting for machine to come up
	I0717 19:59:21.709422 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:21.709889 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:21.709916 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:21.709841 1103733 retry.go:31] will retry after 5.373130791s: waiting for machine to come up
	I0717 19:59:19.273610 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:21.274431 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:19.419889 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:21.422395 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:23.423974 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:20.767398 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:20.767505 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:20.780641 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:21.266963 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:21.267053 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:21.280185 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:21.753855 1103141 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0717 19:59:21.753890 1103141 kubeadm.go:1128] stopping kube-system containers ...
	I0717 19:59:21.753905 1103141 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:59:21.753969 1103141 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:59:21.792189 1103141 cri.go:89] found id: ""
	I0717 19:59:21.792276 1103141 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:59:21.809670 1103141 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:59:21.820341 1103141 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:59:21.820408 1103141 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:59:21.830164 1103141 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 19:59:21.830194 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:21.961988 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:22.788248 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:23.013910 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:23.110334 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
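Rather than a full kubeadm init, the restart path re-runs individual init phases against the generated /var/tmp/minikube/kubeadm.yaml: certificates, kubeconfigs, kubelet bootstrap, then the control-plane and etcd static pod manifests. The sequence from this log, condensed into one place:
	cfg=/var/tmp/minikube/kubeadm.yaml
	bin=/var/lib/minikube/binaries/v1.27.3
	sudo env PATH="$bin:$PATH" kubeadm init phase certs all         --config "$cfg"
	sudo env PATH="$bin:$PATH" kubeadm init phase kubeconfig all    --config "$cfg"
	sudo env PATH="$bin:$PATH" kubeadm init phase kubelet-start     --config "$cfg"
	sudo env PATH="$bin:$PATH" kubeadm init phase control-plane all --config "$cfg"
	sudo env PATH="$bin:$PATH" kubeadm init phase etcd local        --config "$cfg"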
	I0717 19:59:23.204343 1103141 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:59:23.204448 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:59:23.721708 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:59:24.222046 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:59:24.721482 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:59:25.221523 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:59:25.721720 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:59:23.773347 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:26.275805 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:25.424115 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:27.920288 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:27.084831 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.085274 1101908 main.go:141] libmachine: (old-k8s-version-149000) Found IP for machine: 192.168.50.177
	I0717 19:59:27.085299 1101908 main.go:141] libmachine: (old-k8s-version-149000) Reserving static IP address...
	I0717 19:59:27.085332 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has current primary IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.085757 1101908 main.go:141] libmachine: (old-k8s-version-149000) Reserved static IP address: 192.168.50.177
	I0717 19:59:27.085799 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "old-k8s-version-149000", mac: "52:54:00:88:d8:03", ip: "192.168.50.177"} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:27.085821 1101908 main.go:141] libmachine: (old-k8s-version-149000) Waiting for SSH to be available...
	I0717 19:59:27.085855 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | skip adding static IP to network mk-old-k8s-version-149000 - found existing host DHCP lease matching {name: "old-k8s-version-149000", mac: "52:54:00:88:d8:03", ip: "192.168.50.177"}
	I0717 19:59:27.085880 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | Getting to WaitForSSH function...
	I0717 19:59:27.088245 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.088569 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:27.088605 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.088777 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | Using SSH client type: external
	I0717 19:59:27.088809 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | Using SSH private key: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/old-k8s-version-149000/id_rsa (-rw-------)
	I0717 19:59:27.088850 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.177 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/old-k8s-version-149000/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:59:27.088866 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | About to run SSH command:
	I0717 19:59:27.088877 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | exit 0
	I0717 19:59:27.186039 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | SSH cmd err, output: <nil>: 
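The "exit 0" probe above is libmachine's SSH-availability check: it succeeds as soon as the VM accepts a key-based login. Run by hand with the key path and IP from this log, it is roughly equivalent to:
	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -o ConnectTimeout=10 -o IdentitiesOnly=yes \
	    -i /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/old-k8s-version-149000/id_rsa \
	    docker@192.168.50.177 'exit 0' && echo "ssh is up"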
	I0717 19:59:27.186549 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetConfigRaw
	I0717 19:59:27.187427 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetIP
	I0717 19:59:27.190317 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.190738 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:27.190781 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.191089 1101908 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/config.json ...
	I0717 19:59:27.191343 1101908 machine.go:88] provisioning docker machine ...
	I0717 19:59:27.191369 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	I0717 19:59:27.191637 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetMachineName
	I0717 19:59:27.191875 1101908 buildroot.go:166] provisioning hostname "old-k8s-version-149000"
	I0717 19:59:27.191902 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetMachineName
	I0717 19:59:27.192058 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 19:59:27.194710 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.195141 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:27.195190 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.195472 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 19:59:27.195752 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:27.195938 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:27.196104 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 19:59:27.196308 1101908 main.go:141] libmachine: Using SSH client type: native
	I0717 19:59:27.196731 1101908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.177 22 <nil> <nil>}
	I0717 19:59:27.196746 1101908 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-149000 && echo "old-k8s-version-149000" | sudo tee /etc/hostname
	I0717 19:59:27.338648 1101908 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-149000
	
	I0717 19:59:27.338712 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 19:59:27.341719 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.342138 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:27.342176 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.342392 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 19:59:27.342666 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:27.342879 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:27.343036 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 19:59:27.343216 1101908 main.go:141] libmachine: Using SSH client type: native
	I0717 19:59:27.343733 1101908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.177 22 <nil> <nil>}
	I0717 19:59:27.343763 1101908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-149000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-149000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-149000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:59:27.478006 1101908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:59:27.478054 1101908 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16890-1061725/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-1061725/.minikube}
	I0717 19:59:27.478109 1101908 buildroot.go:174] setting up certificates
	I0717 19:59:27.478130 1101908 provision.go:83] configureAuth start
	I0717 19:59:27.478150 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetMachineName
	I0717 19:59:27.478485 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetIP
	I0717 19:59:27.481425 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.481865 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:27.481900 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.482029 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 19:59:27.484825 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.485290 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:27.485326 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.485505 1101908 provision.go:138] copyHostCerts
	I0717 19:59:27.485604 1101908 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem, removing ...
	I0717 19:59:27.485633 1101908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem
	I0717 19:59:27.485709 1101908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem (1082 bytes)
	I0717 19:59:27.485837 1101908 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem, removing ...
	I0717 19:59:27.485849 1101908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem
	I0717 19:59:27.485879 1101908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem (1123 bytes)
	I0717 19:59:27.485957 1101908 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem, removing ...
	I0717 19:59:27.485970 1101908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem
	I0717 19:59:27.485997 1101908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem (1675 bytes)
	I0717 19:59:27.486131 1101908 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-149000 san=[192.168.50.177 192.168.50.177 localhost 127.0.0.1 minikube old-k8s-version-149000]
	I0717 19:59:27.667436 1101908 provision.go:172] copyRemoteCerts
	I0717 19:59:27.667514 1101908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:59:27.667551 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 19:59:27.670875 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.671304 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:27.671340 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.671600 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 19:59:27.671851 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:27.672053 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 19:59:27.672222 1101908 sshutil.go:53] new ssh client: &{IP:192.168.50.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/old-k8s-version-149000/id_rsa Username:docker}
	I0717 19:59:27.764116 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 19:59:27.795726 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 19:59:27.827532 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:59:27.859734 1101908 provision.go:86] duration metric: configureAuth took 381.584228ms
	I0717 19:59:27.859769 1101908 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:59:27.860014 1101908 config.go:182] Loaded profile config "old-k8s-version-149000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0717 19:59:27.860125 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 19:59:27.863330 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.863915 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:27.863969 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.864318 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 19:59:27.864559 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:27.864735 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:27.864931 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 19:59:27.865114 1101908 main.go:141] libmachine: Using SSH client type: native
	I0717 19:59:27.865768 1101908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.177 22 <nil> <nil>}
	I0717 19:59:27.865791 1101908 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:59:28.221755 1101908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:59:28.221788 1101908 machine.go:91] provisioned docker machine in 1.030429206s
	I0717 19:59:28.221802 1101908 start.go:300] post-start starting for "old-k8s-version-149000" (driver="kvm2")
	I0717 19:59:28.221817 1101908 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:59:28.221868 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	I0717 19:59:28.222236 1101908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:59:28.222265 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 19:59:28.225578 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:28.226092 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:28.226130 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:28.226268 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 19:59:28.226511 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:28.226695 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 19:59:28.226875 1101908 sshutil.go:53] new ssh client: &{IP:192.168.50.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/old-k8s-version-149000/id_rsa Username:docker}
	I0717 19:59:28.321338 1101908 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:59:28.326703 1101908 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 19:59:28.326747 1101908 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/addons for local assets ...
	I0717 19:59:28.326843 1101908 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/files for local assets ...
	I0717 19:59:28.326969 1101908 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> 10689542.pem in /etc/ssl/certs
	I0717 19:59:28.327239 1101908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:59:28.337536 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:59:28.366439 1101908 start.go:303] post-start completed in 144.619105ms
	I0717 19:59:28.366476 1101908 fix.go:56] fixHost completed within 25.763256574s
	I0717 19:59:28.366510 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 19:59:28.369661 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:28.370194 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:28.370249 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:28.370470 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 19:59:28.370758 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:28.370956 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:28.371192 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 19:59:28.371476 1101908 main.go:141] libmachine: Using SSH client type: native
	I0717 19:59:28.371943 1101908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.177 22 <nil> <nil>}
	I0717 19:59:28.371970 1101908 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:59:28.498983 1101908 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689623968.431200547
	
	I0717 19:59:28.499015 1101908 fix.go:206] guest clock: 1689623968.431200547
	I0717 19:59:28.499025 1101908 fix.go:219] Guest: 2023-07-17 19:59:28.431200547 +0000 UTC Remote: 2023-07-17 19:59:28.366482535 +0000 UTC m=+386.593094928 (delta=64.718012ms)
	I0717 19:59:28.499083 1101908 fix.go:190] guest clock delta is within tolerance: 64.718012ms
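	The fix.go lines above compare the guest clock (the output of `date +%s.%N`) against the host's timestamp and accept the machine when the delta stays within tolerance. A minimal sketch of that comparison follows, using the two timestamp strings printed in the log; the parsing helper is illustrative and the tolerance constant is an assumption, since the log only reports that 64.718012ms was acceptable.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts the output of `date +%s.%N` (nine fractional digits),
	// e.g. "1689623968.431200547", into a time.Time.
	func parseGuestClock(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		// Both timestamps are the ones printed in the log above.
		guest, err := parseGuestClock("1689623968.431200547")
		if err != nil {
			fmt.Println("parse error:", err)
			return
		}
		remote, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST",
			"2023-07-17 19:59:28.366482535 +0000 UTC")
		if err != nil {
			fmt.Println("parse error:", err)
			return
		}
		delta := guest.Sub(remote)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed tolerance; the log does not print it
		fmt.Printf("guest clock delta is %v (within tolerance: %v)\n", delta, delta <= tolerance)
	}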
	I0717 19:59:28.499090 1101908 start.go:83] releasing machines lock for "old-k8s-version-149000", held for 25.895913429s
	I0717 19:59:28.499122 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	I0717 19:59:28.499449 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetIP
	I0717 19:59:28.502760 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:28.503338 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:28.503395 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:28.503746 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	I0717 19:59:28.504549 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	I0717 19:59:28.504804 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	I0717 19:59:28.504907 1101908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:59:28.504995 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 19:59:28.505142 1101908 ssh_runner.go:195] Run: cat /version.json
	I0717 19:59:28.505175 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 19:59:28.508832 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:28.508868 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:28.509347 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:28.509384 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:28.509412 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:28.509431 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:28.509539 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 19:59:28.509827 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 19:59:28.509888 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:28.510074 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 19:59:28.510126 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:28.510292 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 19:59:28.510284 1101908 sshutil.go:53] new ssh client: &{IP:192.168.50.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/old-k8s-version-149000/id_rsa Username:docker}
	I0717 19:59:28.510442 1101908 sshutil.go:53] new ssh client: &{IP:192.168.50.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/old-k8s-version-149000/id_rsa Username:docker}
	W0717 19:59:28.604171 1101908 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 19:59:28.604283 1101908 ssh_runner.go:195] Run: systemctl --version
	I0717 19:59:28.637495 1101908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:59:28.790306 1101908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:59:28.797261 1101908 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:59:28.797343 1101908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:59:28.822016 1101908 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:59:28.822056 1101908 start.go:469] detecting cgroup driver to use...
	I0717 19:59:28.822144 1101908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:59:28.843785 1101908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:59:28.863178 1101908 docker.go:196] disabling cri-docker service (if available) ...
	I0717 19:59:28.863248 1101908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:59:28.880265 1101908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:59:28.897122 1101908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:59:29.019759 1101908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:59:29.166490 1101908 docker.go:212] disabling docker service ...
	I0717 19:59:29.166561 1101908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:59:29.188125 1101908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:59:29.205693 1101908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:59:29.336805 1101908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:59:29.478585 1101908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:59:29.494755 1101908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:59:29.516478 1101908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0717 19:59:29.516633 1101908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:59:29.527902 1101908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:59:29.528000 1101908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:59:29.539443 1101908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:59:29.551490 1101908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:59:29.563407 1101908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:59:29.577575 1101908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:59:29.587749 1101908 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:59:29.587839 1101908 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:59:29.602120 1101908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:59:29.613647 1101908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:59:29.730721 1101908 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:59:29.907780 1101908 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:59:29.907916 1101908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:59:29.913777 1101908 start.go:537] Will wait 60s for crictl version
	I0717 19:59:29.913855 1101908 ssh_runner.go:195] Run: which crictl
	I0717 19:59:29.921083 1101908 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:59:29.955985 1101908 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 19:59:29.956099 1101908 ssh_runner.go:195] Run: crio --version
	I0717 19:59:30.011733 1101908 ssh_runner.go:195] Run: crio --version
	I0717 19:59:30.068591 1101908 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0717 19:59:25.744228 1103141 api_server.go:72] duration metric: took 2.539876638s to wait for apiserver process to appear ...
	I0717 19:59:25.744263 1103141 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:59:25.744295 1103141 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0717 19:59:25.744850 1103141 api_server.go:269] stopped: https://192.168.39.213:8443/healthz: Get "https://192.168.39.213:8443/healthz": dial tcp 192.168.39.213:8443: connect: connection refused
	I0717 19:59:26.245930 1103141 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0717 19:59:29.163298 1103141 api_server.go:279] https://192.168.39.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:59:29.163345 1103141 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:59:29.163362 1103141 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0717 19:59:29.197738 1103141 api_server.go:279] https://192.168.39.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:59:29.197812 1103141 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:59:29.245946 1103141 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0717 19:59:29.261723 1103141 api_server.go:279] https://192.168.39.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:59:29.261777 1103141 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:59:29.745343 1103141 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0717 19:59:29.753999 1103141 api_server.go:279] https://192.168.39.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:59:29.754040 1103141 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:59:30.245170 1103141 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0717 19:59:30.253748 1103141 api_server.go:279] https://192.168.39.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:59:30.253809 1103141 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:59:30.745290 1103141 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0717 19:59:30.760666 1103141 api_server.go:279] https://192.168.39.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:59:30.760706 1103141 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:59:31.244952 1103141 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0717 19:59:31.262412 1103141 api_server.go:279] https://192.168.39.213:8443/healthz returned 200:
	ok
	I0717 19:59:31.284253 1103141 api_server.go:141] control plane version: v1.27.3
	I0717 19:59:31.284290 1103141 api_server.go:131] duration metric: took 5.540019245s to wait for apiserver health ...
	I0717 19:59:31.284303 1103141 cni.go:84] Creating CNI manager for ""
	I0717 19:59:31.284316 1103141 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:59:31.286828 1103141 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
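	The api_server.go lines above poll https://192.168.39.213:8443/healthz until it returns 200, treating the earlier connection-refused errors, 403 (anonymous user) and 500 (post-start hooks still initialising) responses as "not ready yet". A minimal sketch of such a polling loop follows, assuming plain HTTPS with verification disabled for brevity; the real check authenticates with the cluster certificates, and the retry interval and timeout values here are assumptions.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver healthz endpoint until it answers 200 "ok"
	// or the timeout expires; 403/500 responses and connection errors are logged
	// and retried.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Illustrative shortcut: the real check presents the cluster's
				// certificates instead of skipping verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Printf("stopped: %s: %v\n", url, err)
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // assumed retry interval
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.213:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}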
	I0717 19:59:30.070665 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetIP
	I0717 19:59:30.074049 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:30.074479 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:30.074503 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:30.074871 1101908 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0717 19:59:30.080177 1101908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:59:30.094479 1101908 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0717 19:59:30.094543 1101908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:59:30.130526 1101908 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0717 19:59:30.130599 1101908 ssh_runner.go:195] Run: which lz4
	I0717 19:59:30.135920 1101908 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 19:59:30.140678 1101908 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:59:30.140723 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0717 19:59:28.772996 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:30.785175 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:33.273857 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:30.427017 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:32.920586 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:31.288869 1103141 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:59:31.323116 1103141 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 19:59:31.368054 1103141 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:59:31.392061 1103141 system_pods.go:59] 8 kube-system pods found
	I0717 19:59:31.392110 1103141 system_pods.go:61] "coredns-5d78c9869d-rgdz8" [d1cc8cd3-70eb-4315-89d9-40d4ef97a5c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 19:59:31.392122 1103141 system_pods.go:61] "etcd-embed-certs-114855" [4c8e5fe0-e26e-4244-b284-5a42b4247614] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:59:31.392136 1103141 system_pods.go:61] "kube-apiserver-embed-certs-114855" [3cc43f5e-6c56-4587-a69a-ce58c12f500d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:59:31.392146 1103141 system_pods.go:61] "kube-controller-manager-embed-certs-114855" [cadca801-1feb-45f9-ac3c-eca697f1919f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:59:31.392157 1103141 system_pods.go:61] "kube-proxy-lkncr" [9ec4e4ac-81a5-4547-ab3e-6a3db21cc19d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 19:59:31.392166 1103141 system_pods.go:61] "kube-scheduler-embed-certs-114855" [0e9a0435-a1d5-42bc-a051-1587cd479ac6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:59:31.392184 1103141 system_pods.go:61] "metrics-server-74d5c6b9c-pshr5" [2d4e6b33-c325-4aa5-8458-b604be762cbe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:59:31.392192 1103141 system_pods.go:61] "storage-provisioner" [4f7b39f3-3fc5-4e41-9f58-aa1d938ce06f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 19:59:31.392199 1103141 system_pods.go:74] duration metric: took 24.119934ms to wait for pod list to return data ...
	I0717 19:59:31.392210 1103141 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:59:31.405136 1103141 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 19:59:31.405178 1103141 node_conditions.go:123] node cpu capacity is 2
	I0717 19:59:31.405192 1103141 node_conditions.go:105] duration metric: took 12.975462ms to run NodePressure ...
	I0717 19:59:31.405221 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:32.158757 1103141 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 19:59:32.167221 1103141 kubeadm.go:787] kubelet initialised
	I0717 19:59:32.167263 1103141 kubeadm.go:788] duration metric: took 8.462047ms waiting for restarted kubelet to initialise ...
	I0717 19:59:32.167277 1103141 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:59:32.178888 1103141 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-rgdz8" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:34.199125 1103141 pod_ready.go:102] pod "coredns-5d78c9869d-rgdz8" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:32.017439 1101908 crio.go:444] Took 1.881555 seconds to copy over tarball
	I0717 19:59:32.017535 1101908 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 19:59:35.573024 1101908 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.55545349s)
	I0717 19:59:35.573070 1101908 crio.go:451] Took 3.555594 seconds to extract the tarball
	I0717 19:59:35.573081 1101908 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 19:59:35.622240 1101908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:59:35.672113 1101908 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0717 19:59:35.672149 1101908 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 19:59:35.672223 1101908 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:59:35.672279 1101908 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 19:59:35.672325 1101908 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0717 19:59:35.672344 1101908 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 19:59:35.672453 1101908 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 19:59:35.672533 1101908 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0717 19:59:35.672545 1101908 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 19:59:35.672645 1101908 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0717 19:59:35.674063 1101908 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 19:59:35.674110 1101908 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0717 19:59:35.674127 1101908 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0717 19:59:35.674114 1101908 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 19:59:35.674068 1101908 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 19:59:35.674075 1101908 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 19:59:35.674208 1101908 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:59:35.674236 1101908 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0717 19:59:35.835219 1101908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0717 19:59:35.840811 1101908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0717 19:59:35.855242 1101908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0717 19:59:35.857212 1101908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 19:59:35.860547 1101908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0717 19:59:35.864234 1101908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0717 19:59:35.864519 1101908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0717 19:59:35.958693 1101908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:59:35.980110 1101908 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0717 19:59:35.980198 1101908 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0717 19:59:35.980258 1101908 ssh_runner.go:195] Run: which crictl
	I0717 19:59:36.057216 1101908 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0717 19:59:36.057278 1101908 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 19:59:36.057301 1101908 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0717 19:59:36.057334 1101908 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0717 19:59:36.057342 1101908 ssh_runner.go:195] Run: which crictl
	I0717 19:59:36.057362 1101908 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0717 19:59:36.057383 1101908 ssh_runner.go:195] Run: which crictl
	I0717 19:59:36.057412 1101908 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 19:59:36.057451 1101908 ssh_runner.go:195] Run: which crictl
	I0717 19:59:36.066796 1101908 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0717 19:59:36.066859 1101908 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 19:59:36.066944 1101908 ssh_runner.go:195] Run: which crictl
	I0717 19:59:36.084336 1101908 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0717 19:59:36.084398 1101908 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 19:59:36.084439 1101908 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0717 19:59:36.084454 1101908 ssh_runner.go:195] Run: which crictl
	I0717 19:59:36.084479 1101908 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0717 19:59:36.084520 1101908 ssh_runner.go:195] Run: which crictl
	I0717 19:59:36.208377 1101908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0717 19:59:36.208641 1101908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0717 19:59:36.208730 1101908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0717 19:59:36.208827 1101908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 19:59:36.208839 1101908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0717 19:59:36.208879 1101908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0717 19:59:36.208922 1101908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0717 19:59:36.375090 1101908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0717 19:59:36.375371 1101908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0717 19:59:36.383660 1101908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0717 19:59:36.383770 1101908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0717 19:59:36.383841 1101908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0717 19:59:36.383872 1101908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0717 19:59:36.383950 1101908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0717 19:59:36.383986 1101908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0717 19:59:36.388877 1101908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0717 19:59:36.388897 1101908 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0717 19:59:36.388941 1101908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0717 19:59:35.275990 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:37.773385 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:34.927926 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:36.940406 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:36.219570 1103141 pod_ready.go:102] pod "coredns-5d78c9869d-rgdz8" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:37.338137 1103141 pod_ready.go:92] pod "coredns-5d78c9869d-rgdz8" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:37.338209 1103141 pod_ready.go:81] duration metric: took 5.159283632s waiting for pod "coredns-5d78c9869d-rgdz8" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:37.338228 1103141 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:39.354623 1103141 pod_ready.go:102] pod "etcd-embed-certs-114855" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:37.751639 1101908 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.362667245s)
	I0717 19:59:37.751681 1101908 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0717 19:59:37.751736 1101908 cache_images.go:92] LoadImages completed in 2.079569378s
	W0717 19:59:37.751899 1101908 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	I0717 19:59:37.752005 1101908 ssh_runner.go:195] Run: crio config
	I0717 19:59:37.844809 1101908 cni.go:84] Creating CNI manager for ""
	I0717 19:59:37.844845 1101908 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:59:37.844870 1101908 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 19:59:37.844896 1101908 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.177 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-149000 NodeName:old-k8s-version-149000 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.177"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.177 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 19:59:37.845116 1101908 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.177
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-149000"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.177
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.177"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-149000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.177:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:59:37.845228 1101908 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-149000 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.177
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-149000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 19:59:37.845312 1101908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0717 19:59:37.859556 1101908 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:59:37.859640 1101908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:59:37.872740 1101908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0717 19:59:37.891132 1101908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:59:37.911902 1101908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
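
The kubeadm config and kubelet unit dumped above are rendered by minikube from its option structs and then copied onto the node as kubeadm.yaml.new. A minimal Go sketch of that render step, assuming a hypothetical trimmed-down template and parameter struct rather than minikube's real ones:

package main

import (
	"os"
	"text/template"
)

// params holds the handful of values substituted into the template below.
// The field names are illustrative, not minikube's real struct.
type params struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	KubernetesVersion string
	PodSubnet         string
	ServiceSubnet     string
}

// tmpl is a trimmed-down kubeadm v1beta1 config mirroring the shape of the
// config dumped in the log above.
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	p := params{
		AdvertiseAddress:  "192.168.50.177",
		BindPort:          8443,
		NodeName:          "old-k8s-version-149000",
		KubernetesVersion: "v1.16.0",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	}
	// Render to stdout; in the log above the rendered bytes are instead
	// scp'd to /var/tmp/minikube/kubeadm.yaml.new on the node.
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
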
	I0717 19:59:37.933209 1101908 ssh_runner.go:195] Run: grep 192.168.50.177	control-plane.minikube.internal$ /etc/hosts
	I0717 19:59:37.937317 1101908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.177	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
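
The two commands above make the control-plane.minikube.internal entry in /etc/hosts idempotent: check with grep, then rewrite the file without any stale entry and append the current one. A standalone sketch of the same update done against a local file (values copied from the log; the direct file write stands in for the SSH pipeline):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing "<tab>host" line and appends
// "ip<tab>host", mirroring the grep -v / echo pipeline run over SSH above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale entry for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.50.177", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
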
	I0717 19:59:37.950660 1101908 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000 for IP: 192.168.50.177
	I0717 19:59:37.950706 1101908 certs.go:190] acquiring lock for shared ca certs: {Name:mkdfbc203d7bd874aff2f1c4e5c3352251804e32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:59:37.950921 1101908 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key
	I0717 19:59:37.950998 1101908 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key
	I0717 19:59:37.951128 1101908 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/client.key
	I0717 19:59:37.951227 1101908 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/apiserver.key.c699d2bc
	I0717 19:59:37.951298 1101908 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/proxy-client.key
	I0717 19:59:37.951487 1101908 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem (1338 bytes)
	W0717 19:59:37.951529 1101908 certs.go:433] ignoring /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954_empty.pem, impossibly tiny 0 bytes
	I0717 19:59:37.951541 1101908 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:59:37.951567 1101908 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem (1082 bytes)
	I0717 19:59:37.951593 1101908 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:59:37.951634 1101908 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem (1675 bytes)
	I0717 19:59:37.951691 1101908 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:59:37.952597 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 19:59:37.980488 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 19:59:38.008389 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:59:38.037605 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 19:59:38.066142 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:59:38.095838 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 19:59:38.123279 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:59:38.158528 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 19:59:38.190540 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:59:38.218519 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem --> /usr/share/ca-certificates/1068954.pem (1338 bytes)
	I0717 19:59:38.245203 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /usr/share/ca-certificates/10689542.pem (1708 bytes)
	I0717 19:59:38.273077 1101908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:59:38.292610 1101908 ssh_runner.go:195] Run: openssl version
	I0717 19:59:38.298983 1101908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1068954.pem && ln -fs /usr/share/ca-certificates/1068954.pem /etc/ssl/certs/1068954.pem"
	I0717 19:59:38.311477 1101908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1068954.pem
	I0717 19:59:38.316847 1101908 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:52 /usr/share/ca-certificates/1068954.pem
	I0717 19:59:38.316914 1101908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1068954.pem
	I0717 19:59:38.323114 1101908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1068954.pem /etc/ssl/certs/51391683.0"
	I0717 19:59:38.334773 1101908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10689542.pem && ln -fs /usr/share/ca-certificates/10689542.pem /etc/ssl/certs/10689542.pem"
	I0717 19:59:38.346327 1101908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10689542.pem
	I0717 19:59:38.351639 1101908 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:52 /usr/share/ca-certificates/10689542.pem
	I0717 19:59:38.351712 1101908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10689542.pem
	I0717 19:59:38.357677 1101908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10689542.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:59:38.369278 1101908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:59:38.380948 1101908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:59:38.386116 1101908 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:59:38.386181 1101908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:59:38.392204 1101908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:59:38.404677 1101908 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 19:59:38.409861 1101908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:59:38.416797 1101908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:59:38.424606 1101908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:59:38.431651 1101908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:59:38.439077 1101908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:59:38.445660 1101908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
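
The openssl x509 -checkend 86400 runs above verify that each certificate remains valid for at least another 24 hours. The same check can be done in Go with crypto/x509; this is a standalone sketch (not minikube's code), using one certificate path from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// notExpiringWithin reports whether the first certificate in a PEM file is
// still valid for at least d, i.e. the same check as
// `openssl x509 -noout -in <file> -checkend <seconds>` in the log above.
func notExpiringWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := notExpiringWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("valid for another 24h:", ok)
}
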
	I0717 19:59:38.452464 1101908 kubeadm.go:404] StartCluster: {Name:old-k8s-version-149000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-149000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.177 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:59:38.452656 1101908 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:59:38.452738 1101908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:59:38.485873 1101908 cri.go:89] found id: ""
	I0717 19:59:38.485972 1101908 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:59:38.496998 1101908 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 19:59:38.497033 1101908 kubeadm.go:636] restartCluster start
	I0717 19:59:38.497096 1101908 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:59:38.508054 1101908 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:38.509416 1101908 kubeconfig.go:92] found "old-k8s-version-149000" server: "https://192.168.50.177:8443"
	I0717 19:59:38.512586 1101908 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:59:38.524412 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:38.524486 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:38.537824 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:39.038221 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:39.038331 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:39.053301 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:39.538741 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:39.538834 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:39.552525 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:40.038056 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:40.038173 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:40.052410 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:40.537953 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:40.538060 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:40.551667 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:41.038241 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:41.038361 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:41.053485 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:41.538300 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:41.538402 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:41.552741 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:39.773598 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:42.273083 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:39.423700 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:41.918498 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:43.918876 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:40.856641 1103141 pod_ready.go:92] pod "etcd-embed-certs-114855" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:40.856671 1103141 pod_ready.go:81] duration metric: took 3.518433579s waiting for pod "etcd-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:40.856684 1103141 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:42.377156 1103141 pod_ready.go:92] pod "kube-apiserver-embed-certs-114855" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:42.377186 1103141 pod_ready.go:81] duration metric: took 1.520494525s waiting for pod "kube-apiserver-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:42.377196 1103141 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:42.387651 1103141 pod_ready.go:92] pod "kube-controller-manager-embed-certs-114855" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:42.387680 1103141 pod_ready.go:81] duration metric: took 10.47667ms waiting for pod "kube-controller-manager-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:42.387692 1103141 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lkncr" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:42.394735 1103141 pod_ready.go:92] pod "kube-proxy-lkncr" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:42.394770 1103141 pod_ready.go:81] duration metric: took 7.070744ms waiting for pod "kube-proxy-lkncr" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:42.394784 1103141 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:42.402496 1103141 pod_ready.go:92] pod "kube-scheduler-embed-certs-114855" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:42.402530 1103141 pod_ready.go:81] duration metric: took 7.737273ms waiting for pod "kube-scheduler-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:42.402544 1103141 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:44.460075 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:42.038941 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:42.039027 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:42.054992 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:42.538144 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:42.538257 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:42.552160 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:43.038484 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:43.038599 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:43.052649 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:43.538407 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:43.538511 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:43.552927 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:44.038266 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:44.038396 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:44.051851 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:44.538425 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:44.538520 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:44.551726 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:45.038244 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:45.038359 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:45.053215 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:45.538908 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:45.539008 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:45.552009 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:46.038089 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:46.038204 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:46.051955 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:46.538209 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:46.538311 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:46.552579 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
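
The repeated api_server.go:166/170 pairs above show minikube polling sudo pgrep -xnf kube-apiserver.*minikube.* roughly every 500ms and giving up once its overall deadline expires. A minimal standalone sketch of that polling loop (interval and timeout are illustrative, not minikube's exact values):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForProcess polls `pgrep -xnf pattern` until it succeeds or the deadline
// passes, mirroring the "Checking apiserver status ..." loop in the log above.
func waitForProcess(pattern string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		// pgrep exits 0 when at least one process matches, non-zero otherwise.
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("no process matching %q after %s", pattern, timeout)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, 10*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("kube-apiserver process is up")
}
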
	I0717 19:59:44.273154 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:46.772548 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:45.919143 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:47.919930 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:46.964219 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:49.459411 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:47.038345 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:47.038434 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:47.051506 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:47.538770 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:47.538855 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:47.551813 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:48.038766 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:48.038900 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:48.053717 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:48.524471 1101908 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0717 19:59:48.524521 1101908 kubeadm.go:1128] stopping kube-system containers ...
	I0717 19:59:48.524542 1101908 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:59:48.524625 1101908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:59:48.564396 1101908 cri.go:89] found id: ""
	I0717 19:59:48.564475 1101908 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:59:48.582891 1101908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:59:48.594121 1101908 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:59:48.594212 1101908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:59:48.604963 1101908 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 19:59:48.604998 1101908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:48.756875 1101908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:49.645754 1101908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:49.876047 1101908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:49.996960 1101908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:50.109251 1101908 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:59:50.109337 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:59:50.630868 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:59:51.130836 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:59:51.630446 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:59:51.659578 1101908 api_server.go:72] duration metric: took 1.550325604s to wait for apiserver process to appear ...
	I0717 19:59:51.659605 1101908 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:59:51.659625 1101908 api_server.go:253] Checking apiserver healthz at https://192.168.50.177:8443/healthz ...
	I0717 19:59:48.773967 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:50.775054 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:53.274949 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:49.922365 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:52.422385 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:51.459819 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:53.958809 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:56.660515 1101908 api_server.go:269] stopped: https://192.168.50.177:8443/healthz: Get "https://192.168.50.177:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 19:59:55.773902 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:58.274862 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:54.427715 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:56.922668 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:57.161458 1101908 api_server.go:253] Checking apiserver healthz at https://192.168.50.177:8443/healthz ...
	I0717 19:59:57.720749 1101908 api_server.go:279] https://192.168.50.177:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:59:57.720797 1101908 api_server.go:103] status: https://192.168.50.177:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:59:57.720816 1101908 api_server.go:253] Checking apiserver healthz at https://192.168.50.177:8443/healthz ...
	I0717 19:59:57.828454 1101908 api_server.go:279] https://192.168.50.177:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:59:57.828489 1101908 api_server.go:103] status: https://192.168.50.177:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:59:58.160896 1101908 api_server.go:253] Checking apiserver healthz at https://192.168.50.177:8443/healthz ...
	I0717 19:59:58.173037 1101908 api_server.go:279] https://192.168.50.177:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0717 19:59:58.173072 1101908 api_server.go:103] status: https://192.168.50.177:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0717 19:59:58.660738 1101908 api_server.go:253] Checking apiserver healthz at https://192.168.50.177:8443/healthz ...
	I0717 19:59:58.672508 1101908 api_server.go:279] https://192.168.50.177:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0717 19:59:58.672551 1101908 api_server.go:103] status: https://192.168.50.177:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0717 19:59:59.161133 1101908 api_server.go:253] Checking apiserver healthz at https://192.168.50.177:8443/healthz ...
	I0717 19:59:59.169444 1101908 api_server.go:279] https://192.168.50.177:8443/healthz returned 200:
	ok
	I0717 19:59:59.179637 1101908 api_server.go:141] control plane version: v1.16.0
	I0717 19:59:59.179675 1101908 api_server.go:131] duration metric: took 7.520063574s to wait for apiserver health ...
	I0717 19:59:59.179689 1101908 cni.go:84] Creating CNI manager for ""
	I0717 19:59:59.179703 1101908 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:59:59.182357 1101908 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
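
The healthz sequence above is typical for a freshly restarted control plane: 403 while the probe is still anonymous, 500 while poststarthooks such as rbac/bootstrap-roles finish, then 200. A minimal sketch of polling /healthz until it returns 200, with the simplifying assumption of skipping TLS verification rather than presenting the cluster's client certificates as minikube does:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
// Non-200 bodies, like the 403/500 responses in the log above, are printed.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.177:8443/healthz", 2*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("apiserver healthy")
}
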
	I0717 19:59:55.959106 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:58.458415 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:00.458582 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:59.184702 1101908 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:59:59.197727 1101908 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 19:59:59.226682 1101908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:59:59.237874 1101908 system_pods.go:59] 7 kube-system pods found
	I0717 19:59:59.237911 1101908 system_pods.go:61] "coredns-5644d7b6d9-g7fjx" [f9f27bce-aaf6-43f8-8a4b-a87230ceed0e] Running
	I0717 19:59:59.237917 1101908 system_pods.go:61] "etcd-old-k8s-version-149000" [2c732d6d-8a38-401d-aebf-e439c7fcf530] Running
	I0717 19:59:59.237922 1101908 system_pods.go:61] "kube-apiserver-old-k8s-version-149000" [b7f2c355-86cd-4d4c-b7da-043094174829] Running
	I0717 19:59:59.237927 1101908 system_pods.go:61] "kube-controller-manager-old-k8s-version-149000" [30f723aa-a978-4fbb-9210-43a29284aa41] Running
	I0717 19:59:59.237931 1101908 system_pods.go:61] "kube-proxy-f68hg" [a39dea78-e9bb-4f1b-8615-a51a42c6d13f] Running
	I0717 19:59:59.237935 1101908 system_pods.go:61] "kube-scheduler-old-k8s-version-149000" [a84bce5d-82af-4282-a36f-0d1031715a1a] Running
	I0717 19:59:59.237938 1101908 system_pods.go:61] "storage-provisioner" [c5e96cda-ddbc-4d29-86c3-d7ac4c717f61] Running
	I0717 19:59:59.237944 1101908 system_pods.go:74] duration metric: took 11.222716ms to wait for pod list to return data ...
	I0717 19:59:59.237952 1101908 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:59:59.241967 1101908 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 19:59:59.242003 1101908 node_conditions.go:123] node cpu capacity is 2
	I0717 19:59:59.242051 1101908 node_conditions.go:105] duration metric: took 4.091279ms to run NodePressure ...
	I0717 19:59:59.242080 1101908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:59.612659 1101908 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 19:59:59.623317 1101908 retry.go:31] will retry after 338.189596ms: kubelet not initialised
	I0717 19:59:59.972718 1101908 retry.go:31] will retry after 522.339878ms: kubelet not initialised
	I0717 20:00:00.503134 1101908 retry.go:31] will retry after 523.863562ms: kubelet not initialised
	I0717 20:00:01.032819 1101908 retry.go:31] will retry after 993.099088ms: kubelet not initialised
	I0717 20:00:00.773342 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:02.775558 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:59.424228 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:01.424791 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:03.920321 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:02.462125 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:04.960081 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:02.031287 1101908 retry.go:31] will retry after 1.744721946s: kubelet not initialised
	I0717 20:00:03.780335 1101908 retry.go:31] will retry after 2.704259733s: kubelet not initialised
	I0717 20:00:06.491260 1101908 retry.go:31] will retry after 2.934973602s: kubelet not initialised
	I0717 20:00:05.273963 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:07.772710 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:06.428014 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:08.920105 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:07.459314 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:09.959084 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:09.433009 1101908 retry.go:31] will retry after 2.28873038s: kubelet not initialised
	I0717 20:00:11.729010 1101908 retry.go:31] will retry after 4.261199393s: kubelet not initialised
	I0717 20:00:09.772754 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:11.773102 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:11.424610 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:13.922384 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:11.959437 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:14.459152 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:15.999734 1101908 retry.go:31] will retry after 8.732603244s: kubelet not initialised
	I0717 20:00:14.278965 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:16.772786 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:16.424980 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:18.919729 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:16.460363 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:18.960012 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:18.773609 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:21.272529 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:23.272642 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:20.922495 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:23.422032 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:21.460808 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:23.959242 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:24.739282 1101908 retry.go:31] will retry after 8.040459769s: kubelet not initialised
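
The retry.go:31 lines above show a jittered, roughly doubling backoff while waiting for the restarted kubelet to report its pods. A small sketch of that retry pattern (delays and attempt count are illustrative, not minikube's exact policy):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries fn with roughly doubling, jittered delays, similar
// in spirit to the "will retry after ..." lines in the log above.
func retryWithBackoff(fn func() error, initial time.Duration, attempts int) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %s: %v\n", delay+jitter, err)
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return err
}

func main() {
	calls := 0
	err := retryWithBackoff(func() error {
		calls++
		if calls < 4 {
			return errors.New("kubelet not initialised")
		}
		return nil
	}, 300*time.Millisecond, 10)
	fmt.Println("done after", calls, "calls, err:", err)
}
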
	I0717 20:00:25.274297 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:27.773410 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:25.923167 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:28.420939 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:25.959431 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:27.960549 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:30.459601 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:30.274460 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:32.276595 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:30.428741 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:32.919601 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:32.459855 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:34.960084 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:32.784544 1101908 kubeadm.go:787] kubelet initialised
	I0717 20:00:32.784571 1101908 kubeadm.go:788] duration metric: took 33.171875609s waiting for restarted kubelet to initialise ...
	I0717 20:00:32.784579 1101908 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 20:00:32.789500 1101908 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-9x6xd" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:32.795369 1101908 pod_ready.go:92] pod "coredns-5644d7b6d9-9x6xd" in "kube-system" namespace has status "Ready":"True"
	I0717 20:00:32.795396 1101908 pod_ready.go:81] duration metric: took 5.860061ms waiting for pod "coredns-5644d7b6d9-9x6xd" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:32.795406 1101908 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-g7fjx" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:32.800899 1101908 pod_ready.go:92] pod "coredns-5644d7b6d9-g7fjx" in "kube-system" namespace has status "Ready":"True"
	I0717 20:00:32.800922 1101908 pod_ready.go:81] duration metric: took 5.509805ms waiting for pod "coredns-5644d7b6d9-g7fjx" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:32.800931 1101908 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-149000" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:32.806100 1101908 pod_ready.go:92] pod "etcd-old-k8s-version-149000" in "kube-system" namespace has status "Ready":"True"
	I0717 20:00:32.806123 1101908 pod_ready.go:81] duration metric: took 5.185189ms waiting for pod "etcd-old-k8s-version-149000" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:32.806139 1101908 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-149000" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:32.810963 1101908 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-149000" in "kube-system" namespace has status "Ready":"True"
	I0717 20:00:32.810990 1101908 pod_ready.go:81] duration metric: took 4.843622ms waiting for pod "kube-apiserver-old-k8s-version-149000" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:32.811000 1101908 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-149000" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:33.183907 1101908 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-149000" in "kube-system" namespace has status "Ready":"True"
	I0717 20:00:33.183945 1101908 pod_ready.go:81] duration metric: took 372.931164ms waiting for pod "kube-controller-manager-old-k8s-version-149000" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:33.183961 1101908 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f68hg" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:33.585028 1101908 pod_ready.go:92] pod "kube-proxy-f68hg" in "kube-system" namespace has status "Ready":"True"
	I0717 20:00:33.585064 1101908 pod_ready.go:81] duration metric: took 401.095806ms waiting for pod "kube-proxy-f68hg" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:33.585075 1101908 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-149000" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:33.984668 1101908 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-149000" in "kube-system" namespace has status "Ready":"True"
	I0717 20:00:33.984702 1101908 pod_ready.go:81] duration metric: took 399.618516ms waiting for pod "kube-scheduler-old-k8s-version-149000" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:33.984719 1101908 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace to be "Ready" ...
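The repeated pod_ready.go:102 lines throughout this log are minikube polling each pod's Ready condition until it turns True or the per-pod 4m0s budget runs out. Below is a minimal client-go sketch of an equivalent check; the pod name, namespace and timings are copied from this log for illustration only, and this is not minikube's actual pod_ready implementation:

```go
// Sketch only: polls a pod's Ready condition the way the pod_ready lines
// above do. Pod name, namespace and timings are copied from this log;
// this is not minikube's actual pod_ready implementation.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(4 * time.Minute) // the 4m0s per-pod budget seen in the log
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-74d5856cc6-pjjtx", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out: pod never reached Ready")
}
```

Because the metrics-server pods in these runs never report Ready, a loop like this exhausts the 4m0s budget, which is what produces the "context deadline exceeded" errors later in this log.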
	I0717 20:00:36.392779 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:34.774126 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:37.273706 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:34.921839 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:37.434861 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:37.460518 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:39.960345 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:38.393483 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:40.893085 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:39.773390 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:41.773759 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:39.920512 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:41.920773 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:43.921648 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:42.458830 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:44.958864 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:43.393911 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:45.395481 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:44.273504 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:46.772509 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:45.923812 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:48.422996 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:47.459707 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:49.960056 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:47.892578 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:50.393881 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:48.774960 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:51.273048 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:50.919768 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:52.920372 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:52.458962 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:54.460345 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:52.892172 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:54.893802 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:53.775343 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:56.272701 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:55.427664 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:57.919163 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:56.961203 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:59.458439 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:57.393429 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:59.892089 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:58.772852 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:00.773814 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:03.272058 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:59.920118 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:01.920524 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:01.459281 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:03.460348 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:01.892908 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:04.392588 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:06.393093 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:05.272559 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:07.273883 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:04.421056 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:06.931053 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:05.960254 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:08.457727 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:10.459842 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:08.394141 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:10.892223 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:09.772505 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:11.772971 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:09.422626 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:11.423328 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:13.424365 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:12.958612 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:14.965490 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:12.893418 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:15.394472 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:14.272688 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:16.273685 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:15.919394 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:17.923047 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:17.460160 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:19.958439 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:17.894003 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:19.894407 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:18.772990 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:21.272821 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:23.273740 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:20.427751 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:22.920375 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:21.959239 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:23.959721 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:22.392669 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:24.392858 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:26.392896 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:25.773792 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:28.272610 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:25.423969 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:27.920156 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:25.960648 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:28.460460 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:28.393135 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:30.892597 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:30.273479 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:32.772964 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:29.920769 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:31.921078 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:30.959214 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:33.459431 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:32.892662 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:34.893997 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:35.271152 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:37.273194 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:34.423090 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:36.920078 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:35.960397 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:38.458322 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:40.459780 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:37.393337 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:39.394287 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:39.772604 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:42.273098 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:39.421175 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:41.422356 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:43.920740 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:42.959038 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:45.461396 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:41.891807 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:43.892286 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:45.894698 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:44.772741 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:46.774412 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:46.424856 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:48.425180 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:47.959378 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:49.960002 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:48.392683 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:50.393690 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:49.275313 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:51.773822 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:50.919701 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:52.919921 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:52.459957 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:54.958709 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:52.894991 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:55.392555 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:54.273372 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:56.775369 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:54.920834 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:56.921032 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:57.458730 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:59.460912 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:57.393828 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:59.892700 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:59.272482 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:01.774098 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:59.429623 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:01.920129 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:03.920308 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:01.958119 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:03.958450 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:01.894130 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:03.894522 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:05.895253 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:04.273903 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:06.773689 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:06.424487 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:08.427374 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:05.961652 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:08.457716 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:10.458998 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:08.392784 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:10.393957 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:08.774235 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:11.272040 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:13.273524 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:10.920257 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:12.921203 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:12.459321 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:14.460373 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:12.893440 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:15.392849 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:15.774096 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:18.274263 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:15.421911 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:17.922223 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:16.461304 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:18.958236 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:17.393857 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:19.893380 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:20.274441 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:22.773139 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:20.426046 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:22.919646 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:20.959049 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:23.460465 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:22.392918 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:24.892470 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:25.273192 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:27.273498 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:24.919892 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:26.921648 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:25.961037 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:28.458547 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:26.893611 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:29.393411 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:31.393789 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:29.771999 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:31.772639 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:29.419744 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:31.420846 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:33.422484 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:30.958391 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:33.457895 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:35.459845 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:33.893731 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:36.393503 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:34.272758 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:36.275172 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:35.920446 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:37.922565 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:37.460196 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:39.957808 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:38.394837 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:40.900948 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:38.772728 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:40.773003 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:43.273981 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:40.421480 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:42.919369 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:42.458683 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:44.458762 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:43.392899 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:45.893528 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:45.774587 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:48.273073 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:45.422093 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:47.429470 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:46.958556 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:49.457855 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:47.895376 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:50.392344 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:50.771704 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:52.772560 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:49.918779 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:51.919087 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:51.463426 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:53.957695 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:52.894219 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:54.894786 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:55.273619 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:57.775426 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:54.421093 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:56.424484 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:58.921289 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:55.959421 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:57.960287 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:00.460659 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:57.393604 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:59.394180 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:00.272948 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:02.274904 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:01.421007 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:03.422071 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:02.965138 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:05.458181 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:01.891831 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:03.892978 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:05.895017 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:04.772127 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:07.274312 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:05.920564 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:08.420835 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:07.459555 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:09.460645 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:08.392743 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:10.892887 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:09.772353 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:11.772877 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:12.368174 1102136 pod_ready.go:81] duration metric: took 4m0.000660307s waiting for pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace to be "Ready" ...
	E0717 20:03:12.368224 1102136 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 20:03:12.368251 1102136 pod_ready.go:38] duration metric: took 4m3.60522468s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 20:03:12.368299 1102136 api_server.go:52] waiting for apiserver process to appear ...
	I0717 20:03:12.368343 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 20:03:12.368422 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 20:03:12.425640 1102136 cri.go:89] found id: "eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3"
	I0717 20:03:12.425667 1102136 cri.go:89] found id: ""
	I0717 20:03:12.425684 1102136 logs.go:284] 1 containers: [eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3]
	I0717 20:03:12.425749 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:12.430857 1102136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 20:03:12.430926 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 20:03:12.464958 1102136 cri.go:89] found id: "4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc"
	I0717 20:03:12.464987 1102136 cri.go:89] found id: ""
	I0717 20:03:12.464996 1102136 logs.go:284] 1 containers: [4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc]
	I0717 20:03:12.465063 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:12.470768 1102136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 20:03:12.470865 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 20:03:12.509622 1102136 cri.go:89] found id: "63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce"
	I0717 20:03:12.509655 1102136 cri.go:89] found id: ""
	I0717 20:03:12.509665 1102136 logs.go:284] 1 containers: [63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce]
	I0717 20:03:12.509718 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:12.514266 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 20:03:12.514346 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 20:03:12.556681 1102136 cri.go:89] found id: "0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5"
	I0717 20:03:12.556705 1102136 cri.go:89] found id: ""
	I0717 20:03:12.556713 1102136 logs.go:284] 1 containers: [0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5]
	I0717 20:03:12.556779 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:12.561653 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 20:03:12.561749 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 20:03:12.595499 1102136 cri.go:89] found id: "c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a"
	I0717 20:03:12.595527 1102136 cri.go:89] found id: ""
	I0717 20:03:12.595537 1102136 logs.go:284] 1 containers: [c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a]
	I0717 20:03:12.595603 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:12.600644 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 20:03:12.600728 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 20:03:12.635293 1102136 cri.go:89] found id: "2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9"
	I0717 20:03:12.635327 1102136 cri.go:89] found id: ""
	I0717 20:03:12.635341 1102136 logs.go:284] 1 containers: [2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9]
	I0717 20:03:12.635409 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:12.640445 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 20:03:12.640612 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 20:03:12.679701 1102136 cri.go:89] found id: ""
	I0717 20:03:12.679738 1102136 logs.go:284] 0 containers: []
	W0717 20:03:12.679748 1102136 logs.go:286] No container was found matching "kindnet"
	I0717 20:03:12.679755 1102136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 20:03:12.679817 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 20:03:12.711772 1102136 cri.go:89] found id: "434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447"
	I0717 20:03:12.711815 1102136 cri.go:89] found id: "cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379"
	I0717 20:03:12.711822 1102136 cri.go:89] found id: ""
	I0717 20:03:12.711833 1102136 logs.go:284] 2 containers: [434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447 cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379]
	I0717 20:03:12.711904 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:12.716354 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:12.720769 1102136 logs.go:123] Gathering logs for kube-proxy [c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a] ...
	I0717 20:03:12.720806 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a"
	I0717 20:03:12.757719 1102136 logs.go:123] Gathering logs for kube-controller-manager [2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9] ...
	I0717 20:03:12.757766 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9"
	I0717 20:03:12.804972 1102136 logs.go:123] Gathering logs for coredns [63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce] ...
	I0717 20:03:12.805019 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce"
	I0717 20:03:12.841021 1102136 logs.go:123] Gathering logs for kube-scheduler [0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5] ...
	I0717 20:03:12.841055 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5"
	I0717 20:03:12.890140 1102136 logs.go:123] Gathering logs for storage-provisioner [434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447] ...
	I0717 20:03:12.890185 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447"
	I0717 20:03:12.926177 1102136 logs.go:123] Gathering logs for kubelet ...
	I0717 20:03:12.926219 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 20:03:12.985838 1102136 logs.go:123] Gathering logs for dmesg ...
	I0717 20:03:12.985904 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 20:03:13.003223 1102136 logs.go:123] Gathering logs for describe nodes ...
	I0717 20:03:13.003257 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 20:03:13.180312 1102136 logs.go:123] Gathering logs for kube-apiserver [eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3] ...
	I0717 20:03:13.180361 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3"
	I0717 20:03:13.234663 1102136 logs.go:123] Gathering logs for etcd [4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc] ...
	I0717 20:03:13.234711 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc"
	I0717 20:03:13.297008 1102136 logs.go:123] Gathering logs for storage-provisioner [cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379] ...
	I0717 20:03:13.297065 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379"
	I0717 20:03:13.335076 1102136 logs.go:123] Gathering logs for CRI-O ...
	I0717 20:03:13.335110 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 20:03:10.919208 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:12.921588 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:11.958471 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:13.959630 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:12.893125 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:15.392702 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:13.901775 1102136 logs.go:123] Gathering logs for container status ...
	I0717 20:03:13.901828 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
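The "Gathering logs for ..." steps above first resolve each component's container ID with crictl ps and then tail the last 400 lines of each container. A rough standalone sketch of the same collection flow is below; it assumes it runs on the minikube node with sudo and crictl available, and it omits the journalctl and describe-nodes passes the report also performs:

```go
// Sketch only: reproduces the container-ID lookup and log tailing shown
// above. Assumes it runs on the minikube node with sudo and crictl on PATH;
// the journalctl / describe-nodes passes from the report are omitted.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors "sudo crictl ps -a --quiet --name=<component>".
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner",
	}
	for _, component := range components {
		ids, err := containerIDs(component)
		if err != nil {
			fmt.Println(component, "lookup failed:", err)
			continue
		}
		if len(ids) == 0 {
			fmt.Println("No container was found matching", component)
			continue
		}
		for _, id := range ids {
			// Tail the last 400 lines of each container, as the report does.
			logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			if err != nil {
				fmt.Println(component, id, "logs failed:", err)
				continue
			}
			fmt.Printf("=== %s [%s] ===\n%s\n", component, id, logs)
		}
	}
}
```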
	I0717 20:03:16.451075 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:03:16.470892 1102136 api_server.go:72] duration metric: took 4m15.23519157s to wait for apiserver process to appear ...
	I0717 20:03:16.470922 1102136 api_server.go:88] waiting for apiserver healthz status ...
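After the log sweep, minikube moves from waiting for the apiserver process to polling its healthz status. A hedged sketch of such a probe follows; the address, port 8443 and the InsecureSkipVerify transport are placeholders for illustration and not minikube's actual client configuration, which talks to the cluster's real endpoint with its client certificates:

```go
// Sketch only: a plain HTTPS poll of the apiserver /healthz endpoint.
// The address, port 8443 and InsecureSkipVerify are placeholders for
// illustration; minikube uses the cluster's real endpoint and client certs.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	const healthz = "https://192.168.39.2:8443/healthz" // hypothetical node address

	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(healthz)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthz:", string(body))
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}
```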
	I0717 20:03:16.470963 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 20:03:16.471033 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 20:03:16.515122 1102136 cri.go:89] found id: "eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3"
	I0717 20:03:16.515151 1102136 cri.go:89] found id: ""
	I0717 20:03:16.515161 1102136 logs.go:284] 1 containers: [eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3]
	I0717 20:03:16.515217 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:16.519734 1102136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 20:03:16.519828 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 20:03:16.552440 1102136 cri.go:89] found id: "4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc"
	I0717 20:03:16.552491 1102136 cri.go:89] found id: ""
	I0717 20:03:16.552503 1102136 logs.go:284] 1 containers: [4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc]
	I0717 20:03:16.552569 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:16.557827 1102136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 20:03:16.557935 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 20:03:16.598317 1102136 cri.go:89] found id: "63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce"
	I0717 20:03:16.598344 1102136 cri.go:89] found id: ""
	I0717 20:03:16.598354 1102136 logs.go:284] 1 containers: [63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce]
	I0717 20:03:16.598425 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:16.604234 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 20:03:16.604331 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 20:03:16.638321 1102136 cri.go:89] found id: "0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5"
	I0717 20:03:16.638349 1102136 cri.go:89] found id: ""
	I0717 20:03:16.638360 1102136 logs.go:284] 1 containers: [0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5]
	I0717 20:03:16.638429 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:16.642755 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 20:03:16.642840 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 20:03:16.681726 1102136 cri.go:89] found id: "c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a"
	I0717 20:03:16.681763 1102136 cri.go:89] found id: ""
	I0717 20:03:16.681776 1102136 logs.go:284] 1 containers: [c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a]
	I0717 20:03:16.681848 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:16.686317 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 20:03:16.686394 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 20:03:16.723303 1102136 cri.go:89] found id: "2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9"
	I0717 20:03:16.723328 1102136 cri.go:89] found id: ""
	I0717 20:03:16.723337 1102136 logs.go:284] 1 containers: [2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9]
	I0717 20:03:16.723387 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:16.727491 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 20:03:16.727586 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 20:03:16.756931 1102136 cri.go:89] found id: ""
	I0717 20:03:16.756960 1102136 logs.go:284] 0 containers: []
	W0717 20:03:16.756968 1102136 logs.go:286] No container was found matching "kindnet"
	I0717 20:03:16.756975 1102136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 20:03:16.757036 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 20:03:16.788732 1102136 cri.go:89] found id: "434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447"
	I0717 20:03:16.788819 1102136 cri.go:89] found id: "cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379"
	I0717 20:03:16.788832 1102136 cri.go:89] found id: ""
	I0717 20:03:16.788845 1102136 logs.go:284] 2 containers: [434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447 cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379]
	I0717 20:03:16.788913 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:16.793783 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:16.797868 1102136 logs.go:123] Gathering logs for dmesg ...
	I0717 20:03:16.797892 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 20:03:16.813545 1102136 logs.go:123] Gathering logs for etcd [4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc] ...
	I0717 20:03:16.813603 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc"
	I0717 20:03:16.865094 1102136 logs.go:123] Gathering logs for coredns [63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce] ...
	I0717 20:03:16.865144 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce"
	I0717 20:03:16.904821 1102136 logs.go:123] Gathering logs for kube-scheduler [0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5] ...
	I0717 20:03:16.904869 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5"
	I0717 20:03:16.945822 1102136 logs.go:123] Gathering logs for kube-proxy [c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a] ...
	I0717 20:03:16.945865 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a"
	I0717 20:03:16.986531 1102136 logs.go:123] Gathering logs for storage-provisioner [434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447] ...
	I0717 20:03:16.986580 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447"
	I0717 20:03:17.023216 1102136 logs.go:123] Gathering logs for storage-provisioner [cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379] ...
	I0717 20:03:17.023253 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379"
	I0717 20:03:17.062491 1102136 logs.go:123] Gathering logs for kubelet ...
	I0717 20:03:17.062532 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 20:03:17.137024 1102136 logs.go:123] Gathering logs for describe nodes ...
	I0717 20:03:17.137085 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 20:03:17.292825 1102136 logs.go:123] Gathering logs for kube-apiserver [eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3] ...
	I0717 20:03:17.292881 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3"
	I0717 20:03:17.345470 1102136 logs.go:123] Gathering logs for kube-controller-manager [2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9] ...
	I0717 20:03:17.345519 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9"
	I0717 20:03:17.401262 1102136 logs.go:123] Gathering logs for CRI-O ...
	I0717 20:03:17.401326 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 20:03:18.037384 1102136 logs.go:123] Gathering logs for container status ...
	I0717 20:03:18.037440 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 20:03:15.422242 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:17.011882 1102415 pod_ready.go:81] duration metric: took 4m0.000519116s waiting for pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace to be "Ready" ...
	E0717 20:03:17.011940 1102415 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 20:03:17.011951 1102415 pod_ready.go:38] duration metric: took 4m2.40035739s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 20:03:17.011974 1102415 api_server.go:52] waiting for apiserver process to appear ...
	I0717 20:03:17.012009 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 20:03:17.012082 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 20:03:17.072352 1102415 cri.go:89] found id: "210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a"
	I0717 20:03:17.072381 1102415 cri.go:89] found id: ""
	I0717 20:03:17.072396 1102415 logs.go:284] 1 containers: [210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a]
	I0717 20:03:17.072467 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:17.078353 1102415 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 20:03:17.078432 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 20:03:17.122416 1102415 cri.go:89] found id: "bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2"
	I0717 20:03:17.122455 1102415 cri.go:89] found id: ""
	I0717 20:03:17.122466 1102415 logs.go:284] 1 containers: [bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2]
	I0717 20:03:17.122539 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:17.128311 1102415 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 20:03:17.128394 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 20:03:17.166606 1102415 cri.go:89] found id: "cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524"
	I0717 20:03:17.166637 1102415 cri.go:89] found id: ""
	I0717 20:03:17.166653 1102415 logs.go:284] 1 containers: [cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524]
	I0717 20:03:17.166720 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:17.172605 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 20:03:17.172693 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 20:03:17.221109 1102415 cri.go:89] found id: "9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261"
	I0717 20:03:17.221138 1102415 cri.go:89] found id: ""
	I0717 20:03:17.221149 1102415 logs.go:284] 1 containers: [9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261]
	I0717 20:03:17.221216 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:17.226305 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 20:03:17.226394 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 20:03:17.271876 1102415 cri.go:89] found id: "76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951"
	I0717 20:03:17.271902 1102415 cri.go:89] found id: ""
	I0717 20:03:17.271911 1102415 logs.go:284] 1 containers: [76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951]
	I0717 20:03:17.271979 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:17.281914 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 20:03:17.282016 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 20:03:17.319258 1102415 cri.go:89] found id: "280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6"
	I0717 20:03:17.319288 1102415 cri.go:89] found id: ""
	I0717 20:03:17.319309 1102415 logs.go:284] 1 containers: [280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6]
	I0717 20:03:17.319376 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:17.323955 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 20:03:17.324102 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 20:03:17.357316 1102415 cri.go:89] found id: ""
	I0717 20:03:17.357355 1102415 logs.go:284] 0 containers: []
	W0717 20:03:17.357367 1102415 logs.go:286] No container was found matching "kindnet"
	I0717 20:03:17.357375 1102415 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 20:03:17.357458 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 20:03:17.409455 1102415 cri.go:89] found id: "19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064"
	I0717 20:03:17.409553 1102415 cri.go:89] found id: "4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88"
	I0717 20:03:17.409613 1102415 cri.go:89] found id: ""
	I0717 20:03:17.409626 1102415 logs.go:284] 2 containers: [19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064 4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88]
	I0717 20:03:17.409706 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:17.417046 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:17.428187 1102415 logs.go:123] Gathering logs for kubelet ...
	I0717 20:03:17.428242 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 20:03:17.504409 1102415 logs.go:123] Gathering logs for describe nodes ...
	I0717 20:03:17.504454 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 20:03:17.673502 1102415 logs.go:123] Gathering logs for etcd [bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2] ...
	I0717 20:03:17.673576 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2"
	I0717 20:03:17.728765 1102415 logs.go:123] Gathering logs for kube-controller-manager [280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6] ...
	I0717 20:03:17.728818 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6"
	I0717 20:03:17.791192 1102415 logs.go:123] Gathering logs for container status ...
	I0717 20:03:17.791249 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 20:03:17.844883 1102415 logs.go:123] Gathering logs for storage-provisioner [19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064] ...
	I0717 20:03:17.844944 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064"
	I0717 20:03:17.891456 1102415 logs.go:123] Gathering logs for storage-provisioner [4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88] ...
	I0717 20:03:17.891501 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88"
	I0717 20:03:17.927018 1102415 logs.go:123] Gathering logs for CRI-O ...
	I0717 20:03:17.927057 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 20:03:18.493310 1102415 logs.go:123] Gathering logs for dmesg ...
	I0717 20:03:18.493362 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 20:03:18.510255 1102415 logs.go:123] Gathering logs for kube-apiserver [210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a] ...
	I0717 20:03:18.510302 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a"
	I0717 20:03:18.558006 1102415 logs.go:123] Gathering logs for coredns [cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524] ...
	I0717 20:03:18.558054 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524"
	I0717 20:03:18.595130 1102415 logs.go:123] Gathering logs for kube-scheduler [9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261] ...
	I0717 20:03:18.595166 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261"
	I0717 20:03:18.636909 1102415 logs.go:123] Gathering logs for kube-proxy [76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951] ...
	I0717 20:03:18.636967 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951"
	I0717 20:03:16.460091 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:18.959764 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:17.395341 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:19.891916 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:20.585703 1102136 api_server.go:253] Checking apiserver healthz at https://192.168.61.65:8443/healthz ...
	I0717 20:03:20.591606 1102136 api_server.go:279] https://192.168.61.65:8443/healthz returned 200:
	ok
	I0717 20:03:20.593225 1102136 api_server.go:141] control plane version: v1.27.3
	I0717 20:03:20.593249 1102136 api_server.go:131] duration metric: took 4.122320377s to wait for apiserver health ...
	I0717 20:03:20.593259 1102136 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 20:03:20.593297 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 20:03:20.593391 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 20:03:20.636361 1102136 cri.go:89] found id: "eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3"
	I0717 20:03:20.636401 1102136 cri.go:89] found id: ""
	I0717 20:03:20.636413 1102136 logs.go:284] 1 containers: [eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3]
	I0717 20:03:20.636488 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:20.641480 1102136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 20:03:20.641622 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 20:03:20.674769 1102136 cri.go:89] found id: "4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc"
	I0717 20:03:20.674791 1102136 cri.go:89] found id: ""
	I0717 20:03:20.674799 1102136 logs.go:284] 1 containers: [4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc]
	I0717 20:03:20.674852 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:20.679515 1102136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 20:03:20.679587 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 20:03:20.717867 1102136 cri.go:89] found id: "63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce"
	I0717 20:03:20.717914 1102136 cri.go:89] found id: ""
	I0717 20:03:20.717927 1102136 logs.go:284] 1 containers: [63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce]
	I0717 20:03:20.717997 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:20.723020 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 20:03:20.723106 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 20:03:20.759930 1102136 cri.go:89] found id: "0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5"
	I0717 20:03:20.759957 1102136 cri.go:89] found id: ""
	I0717 20:03:20.759968 1102136 logs.go:284] 1 containers: [0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5]
	I0717 20:03:20.760032 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:20.764308 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 20:03:20.764378 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 20:03:20.804542 1102136 cri.go:89] found id: "c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a"
	I0717 20:03:20.804570 1102136 cri.go:89] found id: ""
	I0717 20:03:20.804580 1102136 logs.go:284] 1 containers: [c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a]
	I0717 20:03:20.804654 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:20.810036 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 20:03:20.810133 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 20:03:20.846655 1102136 cri.go:89] found id: "2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9"
	I0717 20:03:20.846681 1102136 cri.go:89] found id: ""
	I0717 20:03:20.846689 1102136 logs.go:284] 1 containers: [2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9]
	I0717 20:03:20.846745 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:20.853633 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 20:03:20.853741 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 20:03:20.886359 1102136 cri.go:89] found id: ""
	I0717 20:03:20.886393 1102136 logs.go:284] 0 containers: []
	W0717 20:03:20.886405 1102136 logs.go:286] No container was found matching "kindnet"
	I0717 20:03:20.886413 1102136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 20:03:20.886489 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 20:03:20.924476 1102136 cri.go:89] found id: "434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447"
	I0717 20:03:20.924508 1102136 cri.go:89] found id: "cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379"
	I0717 20:03:20.924513 1102136 cri.go:89] found id: ""
	I0717 20:03:20.924524 1102136 logs.go:284] 2 containers: [434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447 cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379]
	I0717 20:03:20.924576 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:20.929775 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:20.935520 1102136 logs.go:123] Gathering logs for CRI-O ...
	I0717 20:03:20.935547 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 20:03:21.543605 1102136 logs.go:123] Gathering logs for describe nodes ...
	I0717 20:03:21.543668 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 20:03:21.694696 1102136 logs.go:123] Gathering logs for kube-scheduler [0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5] ...
	I0717 20:03:21.694763 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5"
	I0717 20:03:21.736092 1102136 logs.go:123] Gathering logs for kube-proxy [c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a] ...
	I0717 20:03:21.736150 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a"
	I0717 20:03:21.771701 1102136 logs.go:123] Gathering logs for kube-controller-manager [2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9] ...
	I0717 20:03:21.771749 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9"
	I0717 20:03:21.822783 1102136 logs.go:123] Gathering logs for storage-provisioner [434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447] ...
	I0717 20:03:21.822835 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447"
	I0717 20:03:21.885797 1102136 logs.go:123] Gathering logs for storage-provisioner [cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379] ...
	I0717 20:03:21.885851 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379"
	I0717 20:03:21.930801 1102136 logs.go:123] Gathering logs for container status ...
	I0717 20:03:21.930842 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 20:03:21.985829 1102136 logs.go:123] Gathering logs for kubelet ...
	I0717 20:03:21.985862 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 20:03:22.056958 1102136 logs.go:123] Gathering logs for dmesg ...
	I0717 20:03:22.057010 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 20:03:22.074352 1102136 logs.go:123] Gathering logs for kube-apiserver [eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3] ...
	I0717 20:03:22.074402 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3"
	I0717 20:03:22.128386 1102136 logs.go:123] Gathering logs for etcd [4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc] ...
	I0717 20:03:22.128437 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc"
	I0717 20:03:22.188390 1102136 logs.go:123] Gathering logs for coredns [63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce] ...
	I0717 20:03:22.188425 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce"
	I0717 20:03:21.172413 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:03:21.194614 1102415 api_server.go:72] duration metric: took 4m13.166163785s to wait for apiserver process to appear ...
	I0717 20:03:21.194645 1102415 api_server.go:88] waiting for apiserver healthz status ...
	I0717 20:03:21.194687 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 20:03:21.194748 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 20:03:21.229142 1102415 cri.go:89] found id: "210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a"
	I0717 20:03:21.229176 1102415 cri.go:89] found id: ""
	I0717 20:03:21.229186 1102415 logs.go:284] 1 containers: [210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a]
	I0717 20:03:21.229255 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:21.234039 1102415 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 20:03:21.234106 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 20:03:21.266482 1102415 cri.go:89] found id: "bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2"
	I0717 20:03:21.266516 1102415 cri.go:89] found id: ""
	I0717 20:03:21.266527 1102415 logs.go:284] 1 containers: [bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2]
	I0717 20:03:21.266596 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:21.271909 1102415 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 20:03:21.271992 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 20:03:21.309830 1102415 cri.go:89] found id: "cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524"
	I0717 20:03:21.309869 1102415 cri.go:89] found id: ""
	I0717 20:03:21.309878 1102415 logs.go:284] 1 containers: [cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524]
	I0717 20:03:21.309943 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:21.314757 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 20:03:21.314838 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 20:03:21.356650 1102415 cri.go:89] found id: "9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261"
	I0717 20:03:21.356681 1102415 cri.go:89] found id: ""
	I0717 20:03:21.356691 1102415 logs.go:284] 1 containers: [9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261]
	I0717 20:03:21.356748 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:21.361582 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 20:03:21.361667 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 20:03:21.394956 1102415 cri.go:89] found id: "76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951"
	I0717 20:03:21.394982 1102415 cri.go:89] found id: ""
	I0717 20:03:21.394994 1102415 logs.go:284] 1 containers: [76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951]
	I0717 20:03:21.395056 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:21.400073 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 20:03:21.400143 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 20:03:21.441971 1102415 cri.go:89] found id: "280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6"
	I0717 20:03:21.442004 1102415 cri.go:89] found id: ""
	I0717 20:03:21.442015 1102415 logs.go:284] 1 containers: [280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6]
	I0717 20:03:21.442083 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:21.447189 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 20:03:21.447253 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 20:03:21.479477 1102415 cri.go:89] found id: ""
	I0717 20:03:21.479512 1102415 logs.go:284] 0 containers: []
	W0717 20:03:21.479524 1102415 logs.go:286] No container was found matching "kindnet"
	I0717 20:03:21.479534 1102415 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 20:03:21.479615 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 20:03:21.515474 1102415 cri.go:89] found id: "19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064"
	I0717 20:03:21.515502 1102415 cri.go:89] found id: "4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88"
	I0717 20:03:21.515510 1102415 cri.go:89] found id: ""
	I0717 20:03:21.515521 1102415 logs.go:284] 2 containers: [19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064 4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88]
	I0717 20:03:21.515583 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:21.520398 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:21.525414 1102415 logs.go:123] Gathering logs for kube-proxy [76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951] ...
	I0717 20:03:21.525450 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951"
	I0717 20:03:21.564455 1102415 logs.go:123] Gathering logs for kubelet ...
	I0717 20:03:21.564492 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 20:03:21.628081 1102415 logs.go:123] Gathering logs for dmesg ...
	I0717 20:03:21.628127 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 20:03:21.646464 1102415 logs.go:123] Gathering logs for describe nodes ...
	I0717 20:03:21.646508 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 20:03:21.803148 1102415 logs.go:123] Gathering logs for kube-apiserver [210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a] ...
	I0717 20:03:21.803205 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a"
	I0717 20:03:21.856704 1102415 logs.go:123] Gathering logs for etcd [bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2] ...
	I0717 20:03:21.856765 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2"
	I0717 20:03:21.907860 1102415 logs.go:123] Gathering logs for coredns [cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524] ...
	I0717 20:03:21.907912 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524"
	I0717 20:03:21.953111 1102415 logs.go:123] Gathering logs for kube-scheduler [9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261] ...
	I0717 20:03:21.953158 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261"
	I0717 20:03:21.999947 1102415 logs.go:123] Gathering logs for kube-controller-manager [280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6] ...
	I0717 20:03:22.000008 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6"
	I0717 20:03:22.061041 1102415 logs.go:123] Gathering logs for storage-provisioner [19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064] ...
	I0717 20:03:22.061078 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064"
	I0717 20:03:22.103398 1102415 logs.go:123] Gathering logs for storage-provisioner [4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88] ...
	I0717 20:03:22.103432 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88"
	I0717 20:03:22.141810 1102415 logs.go:123] Gathering logs for container status ...
	I0717 20:03:22.141864 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 20:03:22.186692 1102415 logs.go:123] Gathering logs for CRI-O ...
	I0717 20:03:22.186726 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 20:03:24.737179 1102136 system_pods.go:59] 8 kube-system pods found
	I0717 20:03:24.737218 1102136 system_pods.go:61] "coredns-5d78c9869d-9mxdj" [fbff09fd-436d-4208-9187-b6312aa1c223] Running
	I0717 20:03:24.737225 1102136 system_pods.go:61] "etcd-no-preload-408472" [7125f19d-c1ed-4b1f-99be-207bbf5d8c70] Running
	I0717 20:03:24.737231 1102136 system_pods.go:61] "kube-apiserver-no-preload-408472" [0e54eaed-daad-434a-ad3b-96f7fb924099] Running
	I0717 20:03:24.737238 1102136 system_pods.go:61] "kube-controller-manager-no-preload-408472" [38ee8079-8142-45a9-8d5a-4abbc5c8bb3b] Running
	I0717 20:03:24.737243 1102136 system_pods.go:61] "kube-proxy-cntdn" [8653567b-abf9-468c-a030-45fc53fa0cc2] Running
	I0717 20:03:24.737248 1102136 system_pods.go:61] "kube-scheduler-no-preload-408472" [e51560a1-c1b0-407c-8635-df512bd033b5] Running
	I0717 20:03:24.737258 1102136 system_pods.go:61] "metrics-server-74d5c6b9c-hnngh" [dfff837e-dbba-4795-935d-9562d2744169] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:03:24.737269 1102136 system_pods.go:61] "storage-provisioner" [1aefd8ef-dec9-4e37-8648-8e5a62622cd3] Running
	I0717 20:03:24.737278 1102136 system_pods.go:74] duration metric: took 4.144012317s to wait for pod list to return data ...
	I0717 20:03:24.737290 1102136 default_sa.go:34] waiting for default service account to be created ...
	I0717 20:03:24.741216 1102136 default_sa.go:45] found service account: "default"
	I0717 20:03:24.741262 1102136 default_sa.go:55] duration metric: took 3.961044ms for default service account to be created ...
	I0717 20:03:24.741275 1102136 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 20:03:24.749060 1102136 system_pods.go:86] 8 kube-system pods found
	I0717 20:03:24.749094 1102136 system_pods.go:89] "coredns-5d78c9869d-9mxdj" [fbff09fd-436d-4208-9187-b6312aa1c223] Running
	I0717 20:03:24.749100 1102136 system_pods.go:89] "etcd-no-preload-408472" [7125f19d-c1ed-4b1f-99be-207bbf5d8c70] Running
	I0717 20:03:24.749104 1102136 system_pods.go:89] "kube-apiserver-no-preload-408472" [0e54eaed-daad-434a-ad3b-96f7fb924099] Running
	I0717 20:03:24.749109 1102136 system_pods.go:89] "kube-controller-manager-no-preload-408472" [38ee8079-8142-45a9-8d5a-4abbc5c8bb3b] Running
	I0717 20:03:24.749113 1102136 system_pods.go:89] "kube-proxy-cntdn" [8653567b-abf9-468c-a030-45fc53fa0cc2] Running
	I0717 20:03:24.749117 1102136 system_pods.go:89] "kube-scheduler-no-preload-408472" [e51560a1-c1b0-407c-8635-df512bd033b5] Running
	I0717 20:03:24.749125 1102136 system_pods.go:89] "metrics-server-74d5c6b9c-hnngh" [dfff837e-dbba-4795-935d-9562d2744169] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:03:24.749139 1102136 system_pods.go:89] "storage-provisioner" [1aefd8ef-dec9-4e37-8648-8e5a62622cd3] Running
	I0717 20:03:24.749147 1102136 system_pods.go:126] duration metric: took 7.865246ms to wait for k8s-apps to be running ...
	I0717 20:03:24.749155 1102136 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 20:03:24.749215 1102136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:03:24.765460 1102136 system_svc.go:56] duration metric: took 16.294048ms WaitForService to wait for kubelet.
	I0717 20:03:24.765503 1102136 kubeadm.go:581] duration metric: took 4m23.529814054s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 20:03:24.765587 1102136 node_conditions.go:102] verifying NodePressure condition ...
	I0717 20:03:24.769332 1102136 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 20:03:24.769368 1102136 node_conditions.go:123] node cpu capacity is 2
	I0717 20:03:24.769381 1102136 node_conditions.go:105] duration metric: took 3.788611ms to run NodePressure ...
	I0717 20:03:24.769392 1102136 start.go:228] waiting for startup goroutines ...
	I0717 20:03:24.769397 1102136 start.go:233] waiting for cluster config update ...
	I0717 20:03:24.769408 1102136 start.go:242] writing updated cluster config ...
	I0717 20:03:24.769830 1102136 ssh_runner.go:195] Run: rm -f paused
	I0717 20:03:24.827845 1102136 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 20:03:24.830624 1102136 out.go:177] * Done! kubectl is now configured to use "no-preload-408472" cluster and "default" namespace by default
	I0717 20:03:20.960575 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:23.458710 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:25.465429 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:21.893446 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:24.393335 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:26.393858 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:25.243410 1102415 api_server.go:253] Checking apiserver healthz at https://192.168.72.51:8444/healthz ...
	I0717 20:03:25.250670 1102415 api_server.go:279] https://192.168.72.51:8444/healthz returned 200:
	ok
	I0717 20:03:25.252086 1102415 api_server.go:141] control plane version: v1.27.3
	I0717 20:03:25.252111 1102415 api_server.go:131] duration metric: took 4.0574608s to wait for apiserver health ...
	I0717 20:03:25.252121 1102415 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 20:03:25.252146 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 20:03:25.252197 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 20:03:25.286754 1102415 cri.go:89] found id: "210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a"
	I0717 20:03:25.286785 1102415 cri.go:89] found id: ""
	I0717 20:03:25.286795 1102415 logs.go:284] 1 containers: [210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a]
	I0717 20:03:25.286867 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:25.292653 1102415 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 20:03:25.292733 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 20:03:25.328064 1102415 cri.go:89] found id: "bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2"
	I0717 20:03:25.328092 1102415 cri.go:89] found id: ""
	I0717 20:03:25.328101 1102415 logs.go:284] 1 containers: [bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2]
	I0717 20:03:25.328170 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:25.333727 1102415 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 20:03:25.333798 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 20:03:25.368132 1102415 cri.go:89] found id: "cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524"
	I0717 20:03:25.368159 1102415 cri.go:89] found id: ""
	I0717 20:03:25.368167 1102415 logs.go:284] 1 containers: [cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524]
	I0717 20:03:25.368245 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:25.373091 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 20:03:25.373197 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 20:03:25.414136 1102415 cri.go:89] found id: "9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261"
	I0717 20:03:25.414165 1102415 cri.go:89] found id: ""
	I0717 20:03:25.414175 1102415 logs.go:284] 1 containers: [9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261]
	I0717 20:03:25.414229 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:25.424603 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 20:03:25.424679 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 20:03:25.470289 1102415 cri.go:89] found id: "76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951"
	I0717 20:03:25.470320 1102415 cri.go:89] found id: ""
	I0717 20:03:25.470331 1102415 logs.go:284] 1 containers: [76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951]
	I0717 20:03:25.470401 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:25.476760 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 20:03:25.476851 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 20:03:25.511350 1102415 cri.go:89] found id: "280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6"
	I0717 20:03:25.511379 1102415 cri.go:89] found id: ""
	I0717 20:03:25.511390 1102415 logs.go:284] 1 containers: [280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6]
	I0717 20:03:25.511459 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:25.516259 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 20:03:25.516339 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 20:03:25.553868 1102415 cri.go:89] found id: ""
	I0717 20:03:25.553913 1102415 logs.go:284] 0 containers: []
	W0717 20:03:25.553925 1102415 logs.go:286] No container was found matching "kindnet"
	I0717 20:03:25.553932 1102415 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 20:03:25.554025 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 20:03:25.589810 1102415 cri.go:89] found id: "19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064"
	I0717 20:03:25.589844 1102415 cri.go:89] found id: "4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88"
	I0717 20:03:25.589851 1102415 cri.go:89] found id: ""
	I0717 20:03:25.589862 1102415 logs.go:284] 2 containers: [19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064 4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88]
	I0717 20:03:25.589924 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:25.594968 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:25.598953 1102415 logs.go:123] Gathering logs for kube-scheduler [9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261] ...
	I0717 20:03:25.598977 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261"
	I0717 20:03:25.640632 1102415 logs.go:123] Gathering logs for kube-controller-manager [280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6] ...
	I0717 20:03:25.640678 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6"
	I0717 20:03:25.692768 1102415 logs.go:123] Gathering logs for storage-provisioner [19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064] ...
	I0717 20:03:25.692812 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064"
	I0717 20:03:25.728461 1102415 logs.go:123] Gathering logs for container status ...
	I0717 20:03:25.728500 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 20:03:25.779239 1102415 logs.go:123] Gathering logs for dmesg ...
	I0717 20:03:25.779278 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 20:03:25.794738 1102415 logs.go:123] Gathering logs for describe nodes ...
	I0717 20:03:25.794790 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 20:03:25.966972 1102415 logs.go:123] Gathering logs for etcd [bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2] ...
	I0717 20:03:25.967016 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2"
	I0717 20:03:26.017430 1102415 logs.go:123] Gathering logs for coredns [cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524] ...
	I0717 20:03:26.017467 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524"
	I0717 20:03:26.053983 1102415 logs.go:123] Gathering logs for kube-proxy [76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951] ...
	I0717 20:03:26.054017 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951"
	I0717 20:03:26.092510 1102415 logs.go:123] Gathering logs for storage-provisioner [4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88] ...
	I0717 20:03:26.092544 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88"
	I0717 20:03:26.127038 1102415 logs.go:123] Gathering logs for CRI-O ...
	I0717 20:03:26.127071 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 20:03:26.728858 1102415 logs.go:123] Gathering logs for kubelet ...
	I0717 20:03:26.728911 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 20:03:26.792099 1102415 logs.go:123] Gathering logs for kube-apiserver [210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a] ...
	I0717 20:03:26.792146 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a"
	I0717 20:03:29.360633 1102415 system_pods.go:59] 8 kube-system pods found
	I0717 20:03:29.360678 1102415 system_pods.go:61] "coredns-5d78c9869d-rjqsv" [f27e2de9-9849-40e2-b6dc-1ee27537b1e6] Running
	I0717 20:03:29.360686 1102415 system_pods.go:61] "etcd-default-k8s-diff-port-711413" [15f74e3f-61a1-4464-bbff-d336f6df4b6e] Running
	I0717 20:03:29.360694 1102415 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-711413" [c164ab32-6a1d-4079-9b58-7da96eabc60e] Running
	I0717 20:03:29.360701 1102415 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-711413" [7dc149c8-be03-4fd6-b945-c63e95a28470] Running
	I0717 20:03:29.360708 1102415 system_pods.go:61] "kube-proxy-9qfpg" [ecb84bb9-57a2-4a42-8104-b792d38479ca] Running
	I0717 20:03:29.360714 1102415 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-711413" [f2cb374f-674e-4a08-82e0-0932b732b485] Running
	I0717 20:03:29.360727 1102415 system_pods.go:61] "metrics-server-74d5c6b9c-hzcd7" [17e01399-9910-4f01-abe7-3eae271af1db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:03:29.360745 1102415 system_pods.go:61] "storage-provisioner" [43705714-97f1-4b06-8eeb-04d60c22112a] Running
	I0717 20:03:29.360755 1102415 system_pods.go:74] duration metric: took 4.108627852s to wait for pod list to return data ...
	I0717 20:03:29.360764 1102415 default_sa.go:34] waiting for default service account to be created ...
	I0717 20:03:29.364887 1102415 default_sa.go:45] found service account: "default"
	I0717 20:03:29.364918 1102415 default_sa.go:55] duration metric: took 4.142278ms for default service account to be created ...
	I0717 20:03:29.364927 1102415 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 20:03:29.372734 1102415 system_pods.go:86] 8 kube-system pods found
	I0717 20:03:29.372774 1102415 system_pods.go:89] "coredns-5d78c9869d-rjqsv" [f27e2de9-9849-40e2-b6dc-1ee27537b1e6] Running
	I0717 20:03:29.372783 1102415 system_pods.go:89] "etcd-default-k8s-diff-port-711413" [15f74e3f-61a1-4464-bbff-d336f6df4b6e] Running
	I0717 20:03:29.372791 1102415 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-711413" [c164ab32-6a1d-4079-9b58-7da96eabc60e] Running
	I0717 20:03:29.372799 1102415 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-711413" [7dc149c8-be03-4fd6-b945-c63e95a28470] Running
	I0717 20:03:29.372806 1102415 system_pods.go:89] "kube-proxy-9qfpg" [ecb84bb9-57a2-4a42-8104-b792d38479ca] Running
	I0717 20:03:29.372813 1102415 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-711413" [f2cb374f-674e-4a08-82e0-0932b732b485] Running
	I0717 20:03:29.372824 1102415 system_pods.go:89] "metrics-server-74d5c6b9c-hzcd7" [17e01399-9910-4f01-abe7-3eae271af1db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:03:29.372832 1102415 system_pods.go:89] "storage-provisioner" [43705714-97f1-4b06-8eeb-04d60c22112a] Running
	I0717 20:03:29.372843 1102415 system_pods.go:126] duration metric: took 7.908204ms to wait for k8s-apps to be running ...
	I0717 20:03:29.372857 1102415 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 20:03:29.372916 1102415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:03:29.393783 1102415 system_svc.go:56] duration metric: took 20.914205ms WaitForService to wait for kubelet.
	I0717 20:03:29.393821 1102415 kubeadm.go:581] duration metric: took 4m21.365424408s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 20:03:29.393853 1102415 node_conditions.go:102] verifying NodePressure condition ...
	I0717 20:03:29.398018 1102415 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 20:03:29.398052 1102415 node_conditions.go:123] node cpu capacity is 2
	I0717 20:03:29.398064 1102415 node_conditions.go:105] duration metric: took 4.205596ms to run NodePressure ...
	I0717 20:03:29.398076 1102415 start.go:228] waiting for startup goroutines ...
	I0717 20:03:29.398082 1102415 start.go:233] waiting for cluster config update ...
	I0717 20:03:29.398102 1102415 start.go:242] writing updated cluster config ...
	I0717 20:03:29.398468 1102415 ssh_runner.go:195] Run: rm -f paused
	I0717 20:03:29.454497 1102415 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 20:03:29.457512 1102415 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-711413" cluster and "default" namespace by default
	I0717 20:03:27.959261 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:30.460004 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:28.394465 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:30.892361 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:32.957801 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:34.958305 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:32.892903 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:35.392748 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:36.958526 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:38.958779 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:37.393705 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:39.892551 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:41.458525 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:42.402712 1103141 pod_ready.go:81] duration metric: took 4m0.00015085s waiting for pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace to be "Ready" ...
	E0717 20:03:42.402748 1103141 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 20:03:42.402774 1103141 pod_ready.go:38] duration metric: took 4m10.235484044s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 20:03:42.402819 1103141 kubeadm.go:640] restartCluster took 4m30.682189828s
	W0717 20:03:42.402887 1103141 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0717 20:03:42.402946 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 20:03:42.393799 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:44.394199 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:46.892897 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:48.895295 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:51.394267 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:53.894027 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:56.393652 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:58.896895 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:01.393396 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:03.892923 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:05.894423 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:08.394591 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:10.893136 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:14.851948 1103141 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.44897498s)
	I0717 20:04:14.852044 1103141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:04:14.868887 1103141 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 20:04:14.879707 1103141 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 20:04:14.890657 1103141 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 20:04:14.890724 1103141 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 20:04:14.961576 1103141 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0717 20:04:14.961661 1103141 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 20:04:15.128684 1103141 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 20:04:15.128835 1103141 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 20:04:15.128966 1103141 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 20:04:15.334042 1103141 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 20:04:15.336736 1103141 out.go:204]   - Generating certificates and keys ...
	I0717 20:04:15.336885 1103141 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 20:04:15.336966 1103141 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 20:04:15.337097 1103141 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 20:04:15.337201 1103141 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0717 20:04:15.337312 1103141 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 20:04:15.337393 1103141 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0717 20:04:15.337769 1103141 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0717 20:04:15.338490 1103141 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0717 20:04:15.338931 1103141 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 20:04:15.339490 1103141 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 20:04:15.339994 1103141 kubeadm.go:322] [certs] Using the existing "sa" key
	I0717 20:04:15.340076 1103141 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 20:04:15.714920 1103141 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 20:04:15.892169 1103141 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 20:04:16.203610 1103141 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 20:04:16.346085 1103141 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 20:04:16.364315 1103141 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 20:04:16.365521 1103141 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 20:04:16.366077 1103141 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 20:04:16.503053 1103141 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 20:04:13.393067 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:15.394199 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:16.505772 1103141 out.go:204]   - Booting up control plane ...
	I0717 20:04:16.505925 1103141 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 20:04:16.506056 1103141 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 20:04:16.511321 1103141 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 20:04:16.513220 1103141 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 20:04:16.516069 1103141 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 20:04:17.892626 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:19.893760 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:25.520496 1103141 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.003077 seconds
	I0717 20:04:25.520676 1103141 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 20:04:25.541790 1103141 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 20:04:26.093172 1103141 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 20:04:26.093446 1103141 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-114855 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 20:04:26.614680 1103141 kubeadm.go:322] [bootstrap-token] Using token: nbkipc.s1xu11jkn2pd9jvz
	I0717 20:04:22.393296 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:24.395001 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:26.617034 1103141 out.go:204]   - Configuring RBAC rules ...
	I0717 20:04:26.617210 1103141 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 20:04:26.625795 1103141 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 20:04:26.645311 1103141 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 20:04:26.650977 1103141 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 20:04:26.656523 1103141 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 20:04:26.662996 1103141 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 20:04:26.691726 1103141 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 20:04:26.969700 1103141 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 20:04:27.038459 1103141 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 20:04:27.039601 1103141 kubeadm.go:322] 
	I0717 20:04:27.039723 1103141 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 20:04:27.039753 1103141 kubeadm.go:322] 
	I0717 20:04:27.039848 1103141 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 20:04:27.039857 1103141 kubeadm.go:322] 
	I0717 20:04:27.039879 1103141 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 20:04:27.039945 1103141 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 20:04:27.040023 1103141 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 20:04:27.040036 1103141 kubeadm.go:322] 
	I0717 20:04:27.040114 1103141 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0717 20:04:27.040123 1103141 kubeadm.go:322] 
	I0717 20:04:27.040192 1103141 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 20:04:27.040202 1103141 kubeadm.go:322] 
	I0717 20:04:27.040302 1103141 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 20:04:27.040419 1103141 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 20:04:27.040533 1103141 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 20:04:27.040543 1103141 kubeadm.go:322] 
	I0717 20:04:27.040653 1103141 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 20:04:27.040780 1103141 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 20:04:27.040792 1103141 kubeadm.go:322] 
	I0717 20:04:27.040917 1103141 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token nbkipc.s1xu11jkn2pd9jvz \
	I0717 20:04:27.041051 1103141 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a35378d49f60cb3201ada96dd7ea3b95e7cce0b7f53bd002f18d31a55869c8fc \
	I0717 20:04:27.041083 1103141 kubeadm.go:322] 	--control-plane 
	I0717 20:04:27.041093 1103141 kubeadm.go:322] 
	I0717 20:04:27.041196 1103141 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 20:04:27.041200 1103141 kubeadm.go:322] 
	I0717 20:04:27.041276 1103141 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token nbkipc.s1xu11jkn2pd9jvz \
	I0717 20:04:27.041420 1103141 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a35378d49f60cb3201ada96dd7ea3b95e7cce0b7f53bd002f18d31a55869c8fc 
	I0717 20:04:27.042440 1103141 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
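	(Editor's note: the --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 digest of the cluster CA's public key, i.e. its DER-encoded SubjectPublicKeyInfo. The Go sketch below reproduces such a hash from a CA certificate; the certs path matches the certificateDir logged earlier, but treat the snippet as illustrative rather than part of the test:

	    package main

	    import (
	        "crypto/sha256"
	        "crypto/x509"
	        "encoding/pem"
	        "fmt"
	        "log"
	        "os"
	    )

	    func main() {
	        // The cluster CA written under minikube's certificateDir; adjust as needed.
	        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	        if err != nil {
	            log.Fatal(err)
	        }
	        block, _ := pem.Decode(pemBytes)
	        if block == nil {
	            log.Fatal("no PEM block found in ca.crt")
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            log.Fatal(err)
	        }
	        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA key.
	        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	        if err != nil {
	            log.Fatal(err)
	        }
	        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
	    }
	)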
	I0717 20:04:27.042466 1103141 cni.go:84] Creating CNI manager for ""
	I0717 20:04:27.042512 1103141 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 20:04:27.046805 1103141 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 20:04:27.049084 1103141 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 20:04:27.115952 1103141 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
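	(Editor's note: the 457-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration minikube recommends for the kvm2 + crio combination. The literal file contents are not reproduced in the log; the Go constant below shows a representative bridge-plus-portmap conflist of that general shape, as an assumption for illustration only:

	    package main

	    import "fmt"

	    // representativeConflist approximates the kind of bridge CNI config
	    // minikube writes here; it is NOT the literal 1-k8s.conflist from this run.
	    const representativeConflist = `{
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": {
	            "type": "host-local",
	            "subnet": "10.244.0.0/16"
	          }
	        },
	        {
	          "type": "portmap",
	          "capabilities": { "portMappings": true }
	        }
	      ]
	    }`

	    func main() {
	        fmt.Println(representativeConflist)
	    }
	)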
	I0717 20:04:27.155521 1103141 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 20:04:27.155614 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:27.155620 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5 minikube.k8s.io/name=embed-certs-114855 minikube.k8s.io/updated_at=2023_07_17T20_04_27_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:27.604520 1103141 ops.go:34] apiserver oom_adj: -16
	I0717 20:04:27.604687 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:28.204384 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:28.703799 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:29.203981 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:29.703475 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:30.204062 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:30.703323 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:26.892819 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:28.895201 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:31.393384 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:31.204070 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:31.704206 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:32.204069 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:32.704193 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:33.203936 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:33.703692 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:34.203584 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:34.704039 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:35.204118 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:35.703385 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:33.893262 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:33.985163 1101908 pod_ready.go:81] duration metric: took 4m0.000422638s waiting for pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace to be "Ready" ...
	E0717 20:04:33.985205 1101908 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 20:04:33.985241 1101908 pod_ready.go:38] duration metric: took 4m1.200649003s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 20:04:33.985298 1101908 kubeadm.go:640] restartCluster took 4m55.488257482s
	W0717 20:04:33.985385 1101908 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0717 20:04:33.985432 1101908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
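	(Editor's note: the failure path above for process 1101908 is the bounded wait at work: the extra wait for system-critical pods is capped at 4m0s, and once that deadline passes minikube abandons restartCluster and falls back to the `kubeadm reset` issued on the line just above, before re-running a fresh init; the reset itself completes about 23 seconds later, at 20:04:57 below. A stripped-down sketch of that deadline-then-fallback pattern, assuming a hypothetical waitForCriticalPods helper:

	    package main

	    import (
	        "context"
	        "errors"
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    // waitForCriticalPods stands in for minikube's pod_ready wait; here it
	    // simply blocks until the context expires so the fallback path is visible.
	    func waitForCriticalPods(ctx context.Context) error {
	        <-ctx.Done()
	        return ctx.Err()
	    }

	    func main() {
	        // minikube uses a 4m0s deadline; shortened here so the sketch runs quickly.
	        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	        defer cancel()

	        if err := waitForCriticalPods(ctx); errors.Is(err, context.DeadlineExceeded) {
	            fmt.Println("! Unable to restart cluster, will reset it")
	            // Fall back to wiping the control plane before re-running kubeadm init.
	            reset := exec.Command("sudo", "kubeadm", "reset",
	                "--cri-socket", "/var/run/crio/crio.sock", "--force")
	            _ = reset.Run()
	        }
	    }
	)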
	I0717 20:04:36.203827 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:36.703377 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:37.203981 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:37.703376 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:38.203498 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:38.703751 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:39.204099 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:39.704172 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:39.830734 1103141 kubeadm.go:1081] duration metric: took 12.675193605s to wait for elevateKubeSystemPrivileges.
	I0717 20:04:39.830771 1103141 kubeadm.go:406] StartCluster complete in 5m28.184955104s
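	(Editor's note: the burst of `kubectl get sa default` runs above, roughly every 500ms from 20:04:27 to 20:04:39, is the elevateKubeSystemPrivileges step summarized on the previous line. Minikube first binds cluster-admin to the kube-system:default service account (the clusterrolebinding command at 20:04:27.155614), then polls until the `default` service account exists, which signals that the apiserver and controller-manager are serving and reconciling. A rough Go equivalent of the polling half, reusing the kubectl path and kubeconfig from the log:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    func main() {
	        kubectl := "/var/lib/minikube/binaries/v1.27.3/kubectl"
	        kubeconfig := "/var/lib/minikube/kubeconfig"

	        deadline := time.Now().Add(5 * time.Minute)
	        for time.Now().Before(deadline) {
	            // Same probe as in the log: the default service account only appears
	            // once the control plane is up and the token controller has run.
	            cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
	                "--kubeconfig="+kubeconfig)
	            if cmd.Run() == nil {
	                fmt.Println("default service account exists; privileges settled")
	                return
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        fmt.Println("timed out waiting for the default service account")
	    }
	)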
	I0717 20:04:39.830796 1103141 settings.go:142] acquiring lock: {Name:mk3323296c186763db6ebb5a1c5ae94a6b1a6242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:04:39.830918 1103141 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 20:04:39.833157 1103141 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/kubeconfig: {Name:mk7f4fe48169c87be980c7edd9dbe55d4ea8b9ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:04:39.834602 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 20:04:39.834801 1103141 config.go:182] Loaded profile config "embed-certs-114855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 20:04:39.834815 1103141 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 20:04:39.835031 1103141 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-114855"
	I0717 20:04:39.835054 1103141 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-114855"
	W0717 20:04:39.835062 1103141 addons.go:240] addon storage-provisioner should already be in state true
	I0717 20:04:39.835120 1103141 host.go:66] Checking if "embed-certs-114855" exists ...
	I0717 20:04:39.835243 1103141 addons.go:69] Setting default-storageclass=true in profile "embed-certs-114855"
	I0717 20:04:39.835240 1103141 addons.go:69] Setting metrics-server=true in profile "embed-certs-114855"
	I0717 20:04:39.835265 1103141 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-114855"
	I0717 20:04:39.835268 1103141 addons.go:231] Setting addon metrics-server=true in "embed-certs-114855"
	W0717 20:04:39.835277 1103141 addons.go:240] addon metrics-server should already be in state true
	I0717 20:04:39.835324 1103141 host.go:66] Checking if "embed-certs-114855" exists ...
	I0717 20:04:39.835732 1103141 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:04:39.835742 1103141 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:04:39.835801 1103141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:04:39.835831 1103141 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:04:39.835799 1103141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:04:39.835916 1103141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:04:39.855470 1103141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35781
	I0717 20:04:39.855482 1103141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35595
	I0717 20:04:39.855481 1103141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35201
	I0717 20:04:39.856035 1103141 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:04:39.856107 1103141 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:04:39.856127 1103141 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:04:39.856776 1103141 main.go:141] libmachine: Using API Version  1
	I0717 20:04:39.856802 1103141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:04:39.856872 1103141 main.go:141] libmachine: Using API Version  1
	I0717 20:04:39.856886 1103141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:04:39.856937 1103141 main.go:141] libmachine: Using API Version  1
	I0717 20:04:39.856967 1103141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:04:39.857216 1103141 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:04:39.857328 1103141 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:04:39.857353 1103141 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:04:39.857979 1103141 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:04:39.858022 1103141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:04:39.858249 1103141 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:04:39.858296 1103141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:04:39.858559 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetState
	I0717 20:04:39.868852 1103141 addons.go:231] Setting addon default-storageclass=true in "embed-certs-114855"
	W0717 20:04:39.868889 1103141 addons.go:240] addon default-storageclass should already be in state true
	I0717 20:04:39.868930 1103141 host.go:66] Checking if "embed-certs-114855" exists ...
	I0717 20:04:39.869376 1103141 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:04:39.869426 1103141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:04:39.877028 1103141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37179
	I0717 20:04:39.877916 1103141 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:04:39.878347 1103141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44951
	I0717 20:04:39.878690 1103141 main.go:141] libmachine: Using API Version  1
	I0717 20:04:39.878713 1103141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:04:39.879085 1103141 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:04:39.879732 1103141 main.go:141] libmachine: Using API Version  1
	I0717 20:04:39.879754 1103141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:04:39.879765 1103141 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:04:39.879950 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetState
	I0717 20:04:39.880175 1103141 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:04:39.880381 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetState
	I0717 20:04:39.882729 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 20:04:39.885818 1103141 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 20:04:39.883284 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 20:04:39.888145 1103141 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 20:04:39.888171 1103141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 20:04:39.888202 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 20:04:39.891651 1103141 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 20:04:39.893769 1103141 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 20:04:39.893066 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 20:04:39.893799 1103141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 20:04:39.893831 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 20:04:39.893840 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 20:04:39.893879 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 20:04:39.894206 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 20:04:39.894454 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 20:04:39.894689 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 20:04:39.894878 1103141 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/embed-certs-114855/id_rsa Username:docker}
	I0717 20:04:39.895562 1103141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39757
	I0717 20:04:39.896172 1103141 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:04:39.896799 1103141 main.go:141] libmachine: Using API Version  1
	I0717 20:04:39.896825 1103141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:04:39.897316 1103141 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:04:39.897969 1103141 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:04:39.898007 1103141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:04:39.898778 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 20:04:39.899616 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 20:04:39.899645 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 20:04:39.899895 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 20:04:39.900193 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 20:04:39.900575 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 20:04:39.900770 1103141 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/embed-certs-114855/id_rsa Username:docker}
	I0717 20:04:39.915966 1103141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43429
	I0717 20:04:39.916539 1103141 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:04:39.917101 1103141 main.go:141] libmachine: Using API Version  1
	I0717 20:04:39.917123 1103141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:04:39.917530 1103141 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:04:39.917816 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetState
	I0717 20:04:39.919631 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 20:04:39.919916 1103141 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 20:04:39.919936 1103141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 20:04:39.919957 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 20:04:39.926132 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 20:04:39.926487 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 20:04:39.926520 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 20:04:39.926779 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 20:04:39.927115 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 20:04:39.927327 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 20:04:39.927522 1103141 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/embed-certs-114855/id_rsa Username:docker}
	I0717 20:04:40.077079 1103141 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 20:04:40.077106 1103141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 20:04:40.084344 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
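	(Editor's note: the long sed pipeline above rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host-side gateway, 192.168.39.1 here, by inserting a hosts stanza ahead of the existing forward directive and a log directive ahead of errors. The Go string below shows the shape of the resulting Corefile fragment; it is reconstructed from the sed expressions, not copied from the cluster, and the surrounding indentation is approximated:

	    package main

	    import "fmt"

	    // corefileFragment approximates the relevant part of the Corefile after
	    // minikube's sed pipeline runs.
	    const corefileFragment = `    log
	        errors
	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf`

	    func main() {
	        fmt.Println(corefileFragment)
	    }
	)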
	I0717 20:04:40.114809 1103141 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 20:04:40.123795 1103141 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 20:04:40.149950 1103141 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 20:04:40.149977 1103141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 20:04:40.222818 1103141 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 20:04:40.222855 1103141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 20:04:40.290773 1103141 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 20:04:40.464132 1103141 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-114855" context rescaled to 1 replicas
	I0717 20:04:40.464182 1103141 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 20:04:40.468285 1103141 out.go:177] * Verifying Kubernetes components...
	I0717 20:04:40.470824 1103141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:04:42.565704 1103141 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.481305344s)
	I0717 20:04:42.565749 1103141 start.go:917] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0717 20:04:43.290667 1103141 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.175803142s)
	I0717 20:04:43.290744 1103141 main.go:141] libmachine: Making call to close driver server
	I0717 20:04:43.290759 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .Close
	I0717 20:04:43.290778 1103141 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.166947219s)
	I0717 20:04:43.290822 1103141 main.go:141] libmachine: Making call to close driver server
	I0717 20:04:43.290840 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .Close
	I0717 20:04:43.291087 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | Closing plugin on server side
	I0717 20:04:43.291217 1103141 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:04:43.291225 1103141 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:04:43.291238 1103141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:04:43.291241 1103141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:04:43.291254 1103141 main.go:141] libmachine: Making call to close driver server
	I0717 20:04:43.291261 1103141 main.go:141] libmachine: Making call to close driver server
	I0717 20:04:43.291268 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .Close
	I0717 20:04:43.291272 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .Close
	I0717 20:04:43.291613 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | Closing plugin on server side
	I0717 20:04:43.291662 1103141 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:04:43.291671 1103141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:04:43.291732 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | Closing plugin on server side
	I0717 20:04:43.291756 1103141 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:04:43.291764 1103141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:04:43.291775 1103141 main.go:141] libmachine: Making call to close driver server
	I0717 20:04:43.291784 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .Close
	I0717 20:04:43.292436 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | Closing plugin on server side
	I0717 20:04:43.292456 1103141 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:04:43.292471 1103141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:04:43.439222 1103141 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.148389848s)
	I0717 20:04:43.439268 1103141 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.968393184s)
	I0717 20:04:43.439310 1103141 node_ready.go:35] waiting up to 6m0s for node "embed-certs-114855" to be "Ready" ...
	I0717 20:04:43.439357 1103141 main.go:141] libmachine: Making call to close driver server
	I0717 20:04:43.439401 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .Close
	I0717 20:04:43.439784 1103141 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:04:43.439806 1103141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:04:43.439863 1103141 main.go:141] libmachine: Making call to close driver server
	I0717 20:04:43.439932 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .Close
	I0717 20:04:43.440202 1103141 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:04:43.440220 1103141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:04:43.440226 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | Closing plugin on server side
	I0717 20:04:43.440232 1103141 addons.go:467] Verifying addon metrics-server=true in "embed-certs-114855"
	I0717 20:04:43.443066 1103141 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 20:04:43.445240 1103141 addons.go:502] enable addons completed in 3.610419127s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 20:04:43.494952 1103141 node_ready.go:49] node "embed-certs-114855" has status "Ready":"True"
	I0717 20:04:43.495002 1103141 node_ready.go:38] duration metric: took 55.676022ms waiting for node "embed-certs-114855" to be "Ready" ...
	I0717 20:04:43.495017 1103141 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
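	(Editor's note: the pod_ready waits that follow, like the earlier ones from process 1101908, all reduce to polling a pod's Ready condition for the listed labels. A self-contained client-go sketch of that check, using the kubeconfig path from the log and the kube-dns label as an example selector; it is a simplified stand-in for minikube's internal helpers:

	    package main

	    import (
	        "context"
	        "fmt"
	        "log"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    // podReady reports whether a pod's Ready condition is True.
	    func podReady(pod *corev1.Pod) bool {
	        for _, c := range pod.Status.Conditions {
	            if c.Type == corev1.PodReady {
	                return c.Status == corev1.ConditionTrue
	            }
	        }
	        return false
	    }

	    func main() {
	        config, err := clientcmd.BuildConfigFromFlags("",
	            "/home/jenkins/minikube-integration/16890-1061725/kubeconfig")
	        if err != nil {
	            log.Fatal(err)
	        }
	        client, err := kubernetes.NewForConfig(config)
	        if err != nil {
	            log.Fatal(err)
	        }

	        // Poll the coredns pods (k8s-app=kube-dns) until every one reports
	        // Ready, mirroring the per-label waits in the log.
	        for {
	            pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
	                metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	            if err != nil {
	                log.Fatal(err)
	            }
	            ready := len(pods.Items) > 0
	            for i := range pods.Items {
	                if !podReady(&pods.Items[i]) {
	                    ready = false
	                }
	            }
	            if ready {
	                fmt.Println("all kube-dns pods are Ready")
	                return
	            }
	            time.Sleep(2 * time.Second)
	        }
	    }
	)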
	I0717 20:04:43.579632 1103141 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-9dkzb" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:44.676633 1103141 pod_ready.go:92] pod "coredns-5d78c9869d-9dkzb" in "kube-system" namespace has status "Ready":"True"
	I0717 20:04:44.676664 1103141 pod_ready.go:81] duration metric: took 1.096981736s waiting for pod "coredns-5d78c9869d-9dkzb" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:44.676677 1103141 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-gq2b2" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:44.683019 1103141 pod_ready.go:92] pod "coredns-5d78c9869d-gq2b2" in "kube-system" namespace has status "Ready":"True"
	I0717 20:04:44.683061 1103141 pod_ready.go:81] duration metric: took 6.376086ms waiting for pod "coredns-5d78c9869d-gq2b2" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:44.683077 1103141 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:44.691140 1103141 pod_ready.go:92] pod "etcd-embed-certs-114855" in "kube-system" namespace has status "Ready":"True"
	I0717 20:04:44.691166 1103141 pod_ready.go:81] duration metric: took 8.082867ms waiting for pod "etcd-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:44.691180 1103141 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:44.713413 1103141 pod_ready.go:92] pod "kube-apiserver-embed-certs-114855" in "kube-system" namespace has status "Ready":"True"
	I0717 20:04:44.713448 1103141 pod_ready.go:81] duration metric: took 22.261351ms waiting for pod "kube-apiserver-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:44.713462 1103141 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:44.728761 1103141 pod_ready.go:92] pod "kube-controller-manager-embed-certs-114855" in "kube-system" namespace has status "Ready":"True"
	I0717 20:04:44.728797 1103141 pod_ready.go:81] duration metric: took 15.326363ms waiting for pod "kube-controller-manager-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:44.728813 1103141 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bfvnl" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:45.583863 1103141 pod_ready.go:92] pod "kube-proxy-bfvnl" in "kube-system" namespace has status "Ready":"True"
	I0717 20:04:45.583901 1103141 pod_ready.go:81] duration metric: took 855.078548ms waiting for pod "kube-proxy-bfvnl" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:45.583915 1103141 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:45.867684 1103141 pod_ready.go:92] pod "kube-scheduler-embed-certs-114855" in "kube-system" namespace has status "Ready":"True"
	I0717 20:04:45.867719 1103141 pod_ready.go:81] duration metric: took 283.796193ms waiting for pod "kube-scheduler-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:45.867735 1103141 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:48.274479 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:50.278380 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:52.775046 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:54.775545 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:56.776685 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:59.275966 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:57.110722 1101908 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (23.125251743s)
	I0717 20:04:57.110813 1101908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:04:57.124991 1101908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 20:04:57.136828 1101908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 20:04:57.146898 1101908 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 20:04:57.146965 1101908 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0717 20:04:57.390116 1101908 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 20:05:01.281623 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:03.776009 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:10.335351 1101908 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0717 20:05:10.335447 1101908 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 20:05:10.335566 1101908 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 20:05:10.335703 1101908 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 20:05:10.335829 1101908 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 20:05:10.335949 1101908 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 20:05:10.336064 1101908 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 20:05:10.336135 1101908 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0717 20:05:10.336220 1101908 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 20:05:10.338257 1101908 out.go:204]   - Generating certificates and keys ...
	I0717 20:05:10.338354 1101908 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 20:05:10.338443 1101908 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 20:05:10.338558 1101908 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 20:05:10.338681 1101908 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0717 20:05:10.338792 1101908 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 20:05:10.338855 1101908 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0717 20:05:10.338950 1101908 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0717 20:05:10.339044 1101908 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0717 20:05:10.339160 1101908 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 20:05:10.339264 1101908 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 20:05:10.339326 1101908 kubeadm.go:322] [certs] Using the existing "sa" key
	I0717 20:05:10.339403 1101908 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 20:05:10.339477 1101908 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 20:05:10.339556 1101908 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 20:05:10.339650 1101908 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 20:05:10.339727 1101908 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 20:05:10.339820 1101908 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 20:05:10.341550 1101908 out.go:204]   - Booting up control plane ...
	I0717 20:05:10.341674 1101908 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 20:05:10.341797 1101908 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 20:05:10.341892 1101908 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 20:05:10.341982 1101908 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 20:05:10.342180 1101908 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 20:05:10.342290 1101908 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.005656 seconds
	I0717 20:05:10.342399 1101908 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 20:05:10.342515 1101908 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 20:05:10.342582 1101908 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 20:05:10.342742 1101908 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-149000 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0717 20:05:10.342830 1101908 kubeadm.go:322] [bootstrap-token] Using token: ki6f1y.fknzxf03oj84iyat
	I0717 20:05:10.344845 1101908 out.go:204]   - Configuring RBAC rules ...
	I0717 20:05:10.344980 1101908 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 20:05:10.345153 1101908 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 20:05:10.345318 1101908 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 20:05:10.345473 1101908 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 20:05:10.345600 1101908 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 20:05:10.345664 1101908 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 20:05:10.345739 1101908 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 20:05:10.345750 1101908 kubeadm.go:322] 
	I0717 20:05:10.345834 1101908 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 20:05:10.345843 1101908 kubeadm.go:322] 
	I0717 20:05:10.345939 1101908 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 20:05:10.345947 1101908 kubeadm.go:322] 
	I0717 20:05:10.345983 1101908 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 20:05:10.346067 1101908 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 20:05:10.346139 1101908 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 20:05:10.346148 1101908 kubeadm.go:322] 
	I0717 20:05:10.346248 1101908 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 20:05:10.346356 1101908 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 20:05:10.346470 1101908 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 20:05:10.346480 1101908 kubeadm.go:322] 
	I0717 20:05:10.346588 1101908 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0717 20:05:10.346686 1101908 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 20:05:10.346695 1101908 kubeadm.go:322] 
	I0717 20:05:10.346821 1101908 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ki6f1y.fknzxf03oj84iyat \
	I0717 20:05:10.346997 1101908 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:a35378d49f60cb3201ada96dd7ea3b95e7cce0b7f53bd002f18d31a55869c8fc \
	I0717 20:05:10.347033 1101908 kubeadm.go:322]     --control-plane 	  
	I0717 20:05:10.347042 1101908 kubeadm.go:322] 
	I0717 20:05:10.347152 1101908 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 20:05:10.347161 1101908 kubeadm.go:322] 
	I0717 20:05:10.347260 1101908 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ki6f1y.fknzxf03oj84iyat \
	I0717 20:05:10.347429 1101908 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:a35378d49f60cb3201ada96dd7ea3b95e7cce0b7f53bd002f18d31a55869c8fc 
	I0717 20:05:10.347449 1101908 cni.go:84] Creating CNI manager for ""
	I0717 20:05:10.347463 1101908 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 20:05:10.349875 1101908 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 20:05:06.284772 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:08.777303 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:10.351592 1101908 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 20:05:10.370891 1101908 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 20:05:10.395381 1101908 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 20:05:10.395477 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5 minikube.k8s.io/name=old-k8s-version-149000 minikube.k8s.io/updated_at=2023_07_17T20_05_10_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:10.395473 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:10.663627 1101908 ops.go:34] apiserver oom_adj: -16
	I0717 20:05:10.663730 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:11.311991 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:11.812120 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:11.275701 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:13.277070 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:12.312047 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:12.811579 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:13.311876 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:13.811911 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:14.311514 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:14.811938 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:15.312088 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:15.812089 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:16.312164 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:16.812065 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:15.776961 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:17.778204 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:20.275642 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:17.312322 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:17.811428 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:18.312070 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:18.812245 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:19.311363 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:19.811909 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:20.311343 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:20.811869 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:21.311974 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:21.811429 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:22.311474 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:22.811809 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:23.311574 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:23.812246 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:24.312115 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:24.812132 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:25.311694 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:25.457162 1101908 kubeadm.go:1081] duration metric: took 15.061765556s to wait for elevateKubeSystemPrivileges.
	I0717 20:05:25.457213 1101908 kubeadm.go:406] StartCluster complete in 5m47.004786394s
	I0717 20:05:25.457273 1101908 settings.go:142] acquiring lock: {Name:mk3323296c186763db6ebb5a1c5ae94a6b1a6242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:05:25.457431 1101908 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 20:05:25.459593 1101908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/kubeconfig: {Name:mk7f4fe48169c87be980c7edd9dbe55d4ea8b9ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:05:25.459942 1101908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 20:05:25.460139 1101908 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 20:05:25.460267 1101908 config.go:182] Loaded profile config "old-k8s-version-149000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0717 20:05:25.460272 1101908 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-149000"
	I0717 20:05:25.460409 1101908 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-149000"
	W0717 20:05:25.460419 1101908 addons.go:240] addon storage-provisioner should already be in state true
	I0717 20:05:25.460516 1101908 host.go:66] Checking if "old-k8s-version-149000" exists ...
	I0717 20:05:25.460284 1101908 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-149000"
	I0717 20:05:25.460709 1101908 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-149000"
	W0717 20:05:25.460727 1101908 addons.go:240] addon metrics-server should already be in state true
	I0717 20:05:25.460294 1101908 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-149000"
	I0717 20:05:25.460771 1101908 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-149000"
	I0717 20:05:25.460793 1101908 host.go:66] Checking if "old-k8s-version-149000" exists ...
	I0717 20:05:25.461033 1101908 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:05:25.461061 1101908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:05:25.461100 1101908 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:05:25.461128 1101908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:05:25.461201 1101908 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:05:25.461227 1101908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:05:25.487047 1101908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45213
	I0717 20:05:25.487091 1101908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44607
	I0717 20:05:25.487066 1101908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35377
	I0717 20:05:25.487833 1101908 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:05:25.487898 1101908 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:05:25.487930 1101908 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:05:25.488571 1101908 main.go:141] libmachine: Using API Version  1
	I0717 20:05:25.488595 1101908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:05:25.488597 1101908 main.go:141] libmachine: Using API Version  1
	I0717 20:05:25.488615 1101908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:05:25.488632 1101908 main.go:141] libmachine: Using API Version  1
	I0717 20:05:25.488660 1101908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:05:25.489058 1101908 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:05:25.489074 1101908 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:05:25.489135 1101908 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:05:25.489284 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetState
	I0717 20:05:25.489635 1101908 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:05:25.489641 1101908 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:05:25.489654 1101908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:05:25.489657 1101908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:05:25.498029 1101908 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-149000"
	W0717 20:05:25.498058 1101908 addons.go:240] addon default-storageclass should already be in state true
	I0717 20:05:25.498092 1101908 host.go:66] Checking if "old-k8s-version-149000" exists ...
	I0717 20:05:25.498485 1101908 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:05:25.498527 1101908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:05:25.506931 1101908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33279
	I0717 20:05:25.507478 1101908 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:05:25.508080 1101908 main.go:141] libmachine: Using API Version  1
	I0717 20:05:25.508109 1101908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:05:25.508562 1101908 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:05:25.508845 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetState
	I0717 20:05:25.510969 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	I0717 20:05:25.513078 1101908 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 20:05:25.511340 1101908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42217
	I0717 20:05:25.515599 1101908 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 20:05:25.515626 1101908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 20:05:25.515655 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 20:05:25.516012 1101908 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:05:25.516682 1101908 main.go:141] libmachine: Using API Version  1
	I0717 20:05:25.516709 1101908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:05:25.517198 1101908 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:05:25.517438 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetState
	I0717 20:05:25.519920 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 20:05:25.520835 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	I0717 20:05:25.521176 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 20:05:25.521204 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 20:05:25.523226 1101908 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 20:05:22.775399 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:25.278740 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:25.521305 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 20:05:25.523448 1101908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38723
	I0717 20:05:25.525260 1101908 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 20:05:25.525280 1101908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 20:05:25.525310 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 20:05:25.525529 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 20:05:25.526263 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 20:05:25.526597 1101908 sshutil.go:53] new ssh client: &{IP:192.168.50.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/old-k8s-version-149000/id_rsa Username:docker}
	I0717 20:05:25.527369 1101908 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:05:25.528329 1101908 main.go:141] libmachine: Using API Version  1
	I0717 20:05:25.528357 1101908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:05:25.528696 1101908 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:05:25.528792 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 20:05:25.529350 1101908 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:05:25.529381 1101908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:05:25.529649 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 20:05:25.529655 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 20:05:25.529674 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 20:05:25.529823 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 20:05:25.529949 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 20:05:25.530088 1101908 sshutil.go:53] new ssh client: &{IP:192.168.50.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/old-k8s-version-149000/id_rsa Username:docker}
	I0717 20:05:25.552954 1101908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33783
	I0717 20:05:25.553470 1101908 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:05:25.554117 1101908 main.go:141] libmachine: Using API Version  1
	I0717 20:05:25.554145 1101908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:05:25.554521 1101908 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:05:25.554831 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetState
	I0717 20:05:25.556872 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	I0717 20:05:25.557158 1101908 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 20:05:25.557183 1101908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 20:05:25.557204 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 20:05:25.560114 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 20:05:25.560622 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 20:05:25.560656 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 20:05:25.561095 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 20:05:25.561350 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 20:05:25.561512 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 20:05:25.561749 1101908 sshutil.go:53] new ssh client: &{IP:192.168.50.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/old-k8s-version-149000/id_rsa Username:docker}
	I0717 20:05:25.724163 1101908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 20:05:25.749198 1101908 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 20:05:25.749231 1101908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 20:05:25.754533 1101908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 20:05:25.757518 1101908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 20:05:25.811831 1101908 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 20:05:25.811867 1101908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 20:05:25.893143 1101908 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 20:05:25.893175 1101908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 20:05:25.994781 1101908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 20:05:26.019864 1101908 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-149000" context rescaled to 1 replicas
	I0717 20:05:26.019914 1101908 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.177 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 20:05:26.022777 1101908 out.go:177] * Verifying Kubernetes components...
	I0717 20:05:26.025694 1101908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:05:27.100226 1101908 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.376005593s)
	I0717 20:05:27.100282 1101908 main.go:141] libmachine: Making call to close driver server
	I0717 20:05:27.100295 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .Close
	I0717 20:05:27.100306 1101908 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.345727442s)
	I0717 20:05:27.100343 1101908 start.go:917] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0717 20:05:27.100360 1101908 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.342808508s)
	I0717 20:05:27.100411 1101908 main.go:141] libmachine: Making call to close driver server
	I0717 20:05:27.100426 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .Close
	I0717 20:05:27.100781 1101908 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:05:27.100799 1101908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:05:27.100810 1101908 main.go:141] libmachine: Making call to close driver server
	I0717 20:05:27.100821 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .Close
	I0717 20:05:27.100866 1101908 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:05:27.100877 1101908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:05:27.100876 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | Closing plugin on server side
	I0717 20:05:27.100885 1101908 main.go:141] libmachine: Making call to close driver server
	I0717 20:05:27.100894 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .Close
	I0717 20:05:27.101035 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | Closing plugin on server side
	I0717 20:05:27.101065 1101908 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:05:27.101100 1101908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:05:27.101154 1101908 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:05:27.101170 1101908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:05:27.101185 1101908 main.go:141] libmachine: Making call to close driver server
	I0717 20:05:27.101195 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .Close
	I0717 20:05:27.101423 1101908 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:05:27.101441 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | Closing plugin on server side
	I0717 20:05:27.101448 1101908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:05:27.169038 1101908 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.143298277s)
	I0717 20:05:27.169095 1101908 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-149000" to be "Ready" ...
	I0717 20:05:27.169044 1101908 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.174211865s)
	I0717 20:05:27.169278 1101908 main.go:141] libmachine: Making call to close driver server
	I0717 20:05:27.169333 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .Close
	I0717 20:05:27.169672 1101908 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:05:27.169782 1101908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:05:27.169814 1101908 main.go:141] libmachine: Making call to close driver server
	I0717 20:05:27.169837 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .Close
	I0717 20:05:27.169758 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | Closing plugin on server side
	I0717 20:05:27.171950 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | Closing plugin on server side
	I0717 20:05:27.171960 1101908 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:05:27.171979 1101908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:05:27.171992 1101908 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-149000"
	I0717 20:05:27.174411 1101908 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 20:05:27.777543 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:30.276174 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:27.176695 1101908 addons.go:502] enable addons completed in 1.716545434s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 20:05:27.191392 1101908 node_ready.go:49] node "old-k8s-version-149000" has status "Ready":"True"
	I0717 20:05:27.191435 1101908 node_ready.go:38] duration metric: took 22.324367ms waiting for node "old-k8s-version-149000" to be "Ready" ...
	I0717 20:05:27.191450 1101908 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 20:05:27.203011 1101908 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-ldwkf" in "kube-system" namespace to be "Ready" ...
	I0717 20:05:29.214694 1101908 pod_ready.go:102] pod "coredns-5644d7b6d9-ldwkf" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:31.215215 1101908 pod_ready.go:92] pod "coredns-5644d7b6d9-ldwkf" in "kube-system" namespace has status "Ready":"True"
	I0717 20:05:31.215244 1101908 pod_ready.go:81] duration metric: took 4.012199031s waiting for pod "coredns-5644d7b6d9-ldwkf" in "kube-system" namespace to be "Ready" ...
	I0717 20:05:31.215265 1101908 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-t4mmh" in "kube-system" namespace to be "Ready" ...
	I0717 20:05:31.222461 1101908 pod_ready.go:92] pod "kube-proxy-t4mmh" in "kube-system" namespace has status "Ready":"True"
	I0717 20:05:31.222489 1101908 pod_ready.go:81] duration metric: took 7.215944ms waiting for pod "kube-proxy-t4mmh" in "kube-system" namespace to be "Ready" ...
	I0717 20:05:31.222504 1101908 pod_ready.go:38] duration metric: took 4.031041761s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 20:05:31.222530 1101908 api_server.go:52] waiting for apiserver process to appear ...
	I0717 20:05:31.222606 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:05:31.239450 1101908 api_server.go:72] duration metric: took 5.21948786s to wait for apiserver process to appear ...
	I0717 20:05:31.239494 1101908 api_server.go:88] waiting for apiserver healthz status ...
	I0717 20:05:31.239520 1101908 api_server.go:253] Checking apiserver healthz at https://192.168.50.177:8443/healthz ...
	I0717 20:05:31.247985 1101908 api_server.go:279] https://192.168.50.177:8443/healthz returned 200:
	ok
	I0717 20:05:31.249351 1101908 api_server.go:141] control plane version: v1.16.0
	I0717 20:05:31.249383 1101908 api_server.go:131] duration metric: took 9.880729ms to wait for apiserver health ...
	I0717 20:05:31.249391 1101908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 20:05:31.255025 1101908 system_pods.go:59] 4 kube-system pods found
	I0717 20:05:31.255062 1101908 system_pods.go:61] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:31.255069 1101908 system_pods.go:61] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:31.255076 1101908 system_pods.go:61] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:31.255086 1101908 system_pods.go:61] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:31.255095 1101908 system_pods.go:74] duration metric: took 5.697473ms to wait for pod list to return data ...
	I0717 20:05:31.255106 1101908 default_sa.go:34] waiting for default service account to be created ...
	I0717 20:05:31.259740 1101908 default_sa.go:45] found service account: "default"
	I0717 20:05:31.259772 1101908 default_sa.go:55] duration metric: took 4.660789ms for default service account to be created ...
	I0717 20:05:31.259780 1101908 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 20:05:31.264000 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:31.264044 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:31.264051 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:31.264081 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:31.264093 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:31.264116 1101908 retry.go:31] will retry after 269.941707ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:31.540816 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:31.540865 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:31.540876 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:31.540891 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:31.540922 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:31.540951 1101908 retry.go:31] will retry after 335.890023ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:32.287639 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:34.776299 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:31.881678 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:31.881721 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:31.881731 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:31.881742 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:31.881754 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:31.881778 1101908 retry.go:31] will retry after 452.6849ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:32.340889 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:32.340919 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:32.340924 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:32.340931 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:32.340938 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:32.340954 1101908 retry.go:31] will retry after 433.94285ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:32.780743 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:32.780777 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:32.780784 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:32.780795 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:32.780808 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:32.780830 1101908 retry.go:31] will retry after 664.997213ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:33.450870 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:33.450901 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:33.450906 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:33.450912 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:33.450919 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:33.450936 1101908 retry.go:31] will retry after 669.043592ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:34.126116 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:34.126155 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:34.126164 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:34.126177 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:34.126187 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:34.126207 1101908 retry.go:31] will retry after 799.422303ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:34.930555 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:34.930595 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:34.930604 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:34.930614 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:34.930624 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:34.930648 1101908 retry.go:31] will retry after 1.329879988s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:36.266531 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:36.266570 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:36.266578 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:36.266586 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:36.266596 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:36.266616 1101908 retry.go:31] will retry after 1.667039225s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:37.275872 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:39.776283 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:37.940699 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:37.940736 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:37.940746 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:37.940756 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:37.940768 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:37.940793 1101908 retry.go:31] will retry after 1.426011935s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:39.371704 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:39.371738 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:39.371743 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:39.371750 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:39.371757 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:39.371775 1101908 retry.go:31] will retry after 2.864830097s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:42.276143 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:44.775621 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:42.241652 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:42.241693 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:42.241701 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:42.241713 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:42.241723 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:42.241744 1101908 retry.go:31] will retry after 2.785860959s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:45.034761 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:45.034793 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:45.034798 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:45.034806 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:45.034818 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:45.034839 1101908 retry.go:31] will retry after 3.037872313s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:46.776795 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:49.276343 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:48.078790 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:48.078826 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:48.078831 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:48.078842 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:48.078849 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:48.078867 1101908 retry.go:31] will retry after 4.546196458s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:51.777942 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:54.274279 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:52.631941 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:52.631986 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:52.631995 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:52.632006 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:52.632017 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:52.632043 1101908 retry.go:31] will retry after 6.391777088s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:56.276359 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:58.277520 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:59.036918 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:59.036951 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:59.036956 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:59.036963 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:59.036970 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:59.036988 1101908 retry.go:31] will retry after 5.758521304s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:06:00.776149 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:03.276291 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:05.276530 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:04.801914 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:06:04.801944 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:06:04.801950 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:06:04.801958 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:06:04.801965 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:06:04.801982 1101908 retry.go:31] will retry after 7.046104479s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:06:07.777447 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:10.275741 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:12.776577 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:14.776717 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:11.856116 1101908 system_pods.go:86] 8 kube-system pods found
	I0717 20:06:11.856165 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:06:11.856175 1101908 system_pods.go:89] "etcd-old-k8s-version-149000" [702c8e9f-d99a-4766-af97-550dc956f093] Pending
	I0717 20:06:11.856183 1101908 system_pods.go:89] "kube-apiserver-old-k8s-version-149000" [0f0c9817-f4c9-4266-b576-c270cea11b4b] Pending
	I0717 20:06:11.856191 1101908 system_pods.go:89] "kube-controller-manager-old-k8s-version-149000" [539db0c4-6e8c-42eb-9b73-686de5f6c7bf] Running
	I0717 20:06:11.856207 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:06:11.856216 1101908 system_pods.go:89] "kube-scheduler-old-k8s-version-149000" [5a27a0f7-c6c9-4324-a51c-d33c205d8724] Running
	I0717 20:06:11.856295 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:06:11.856308 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:06:11.856336 1101908 retry.go:31] will retry after 13.224383762s: missing components: etcd, kube-apiserver
	I0717 20:06:16.779816 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:19.275840 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:25.091227 1101908 system_pods.go:86] 8 kube-system pods found
	I0717 20:06:25.091272 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:06:25.091281 1101908 system_pods.go:89] "etcd-old-k8s-version-149000" [702c8e9f-d99a-4766-af97-550dc956f093] Running
	I0717 20:06:25.091288 1101908 system_pods.go:89] "kube-apiserver-old-k8s-version-149000" [0f0c9817-f4c9-4266-b576-c270cea11b4b] Running
	I0717 20:06:25.091298 1101908 system_pods.go:89] "kube-controller-manager-old-k8s-version-149000" [539db0c4-6e8c-42eb-9b73-686de5f6c7bf] Running
	I0717 20:06:25.091305 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:06:25.091312 1101908 system_pods.go:89] "kube-scheduler-old-k8s-version-149000" [5a27a0f7-c6c9-4324-a51c-d33c205d8724] Running
	I0717 20:06:25.091324 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:06:25.091337 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:06:25.091348 1101908 system_pods.go:126] duration metric: took 53.831561334s to wait for k8s-apps to be running ...
	I0717 20:06:25.091360 1101908 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 20:06:25.091455 1101908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:06:25.119739 1101908 system_svc.go:56] duration metric: took 28.348212ms WaitForService to wait for kubelet.
	I0717 20:06:25.119804 1101908 kubeadm.go:581] duration metric: took 59.099852409s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 20:06:25.119854 1101908 node_conditions.go:102] verifying NodePressure condition ...
	I0717 20:06:25.123561 1101908 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 20:06:25.123592 1101908 node_conditions.go:123] node cpu capacity is 2
	I0717 20:06:25.123606 1101908 node_conditions.go:105] duration metric: took 3.739793ms to run NodePressure ...
	I0717 20:06:25.123618 1101908 start.go:228] waiting for startup goroutines ...
	I0717 20:06:25.123624 1101908 start.go:233] waiting for cluster config update ...
	I0717 20:06:25.123669 1101908 start.go:242] writing updated cluster config ...
	I0717 20:06:25.124104 1101908 ssh_runner.go:195] Run: rm -f paused
	I0717 20:06:25.182838 1101908 start.go:578] kubectl: 1.27.3, cluster: 1.16.0 (minor skew: 11)
	I0717 20:06:25.185766 1101908 out.go:177] 
	W0717 20:06:25.188227 1101908 out.go:239] ! /usr/local/bin/kubectl is version 1.27.3, which may have incompatibilities with Kubernetes 1.16.0.
	I0717 20:06:25.190452 1101908 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0717 20:06:25.192660 1101908 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-149000" cluster and "default" namespace by default
	I0717 20:06:21.776152 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:23.776276 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:25.781589 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:28.278450 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:30.775293 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:33.276069 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:35.775650 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:37.777006 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:40.275701 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:42.774969 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:44.775928 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:46.776363 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:48.786345 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:51.276618 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:53.776161 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:56.276037 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:58.276310 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:00.276357 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:02.775722 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:04.775945 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:07.280130 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:09.776589 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:12.277066 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:14.775525 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:17.275601 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:19.777143 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:22.286857 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:24.775908 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:26.779341 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:29.275732 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:31.276783 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:33.776286 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:36.274383 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:38.275384 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:40.775469 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:42.776331 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:44.776843 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:47.276067 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:49.276907 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:51.277652 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:53.776315 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:55.780034 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:58.276277 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:00.776903 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:03.276429 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:05.277182 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:07.776330 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:09.777528 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:12.275388 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:14.275926 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:16.776757 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:19.276466 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:21.276544 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:23.775888 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:25.778534 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:28.277897 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:30.775389 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:32.777134 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:34.777503 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:37.276492 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:39.775380 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:41.777135 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:44.276305 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:45.868652 1103141 pod_ready.go:81] duration metric: took 4m0.000895459s waiting for pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace to be "Ready" ...
	E0717 20:08:45.868703 1103141 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 20:08:45.868714 1103141 pod_ready.go:38] duration metric: took 4m2.373683506s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 20:08:45.868742 1103141 api_server.go:52] waiting for apiserver process to appear ...
	I0717 20:08:45.868791 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 20:08:45.868907 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 20:08:45.926927 1103141 cri.go:89] found id: "b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f"
	I0717 20:08:45.926965 1103141 cri.go:89] found id: ""
	I0717 20:08:45.926977 1103141 logs.go:284] 1 containers: [b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f]
	I0717 20:08:45.927049 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:45.932247 1103141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 20:08:45.932335 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 20:08:45.976080 1103141 cri.go:89] found id: "6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8"
	I0717 20:08:45.976176 1103141 cri.go:89] found id: ""
	I0717 20:08:45.976200 1103141 logs.go:284] 1 containers: [6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8]
	I0717 20:08:45.976287 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:45.981650 1103141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 20:08:45.981738 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 20:08:46.017454 1103141 cri.go:89] found id: "9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e"
	I0717 20:08:46.017487 1103141 cri.go:89] found id: ""
	I0717 20:08:46.017495 1103141 logs.go:284] 1 containers: [9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e]
	I0717 20:08:46.017578 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:46.023282 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 20:08:46.023361 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 20:08:46.055969 1103141 cri.go:89] found id: "20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52"
	I0717 20:08:46.055998 1103141 cri.go:89] found id: ""
	I0717 20:08:46.056009 1103141 logs.go:284] 1 containers: [20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52]
	I0717 20:08:46.056063 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:46.061090 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 20:08:46.061181 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 20:08:46.094968 1103141 cri.go:89] found id: "c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39"
	I0717 20:08:46.095001 1103141 cri.go:89] found id: ""
	I0717 20:08:46.095012 1103141 logs.go:284] 1 containers: [c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39]
	I0717 20:08:46.095089 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:46.099940 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 20:08:46.100018 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 20:08:46.132535 1103141 cri.go:89] found id: "7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd"
	I0717 20:08:46.132571 1103141 cri.go:89] found id: ""
	I0717 20:08:46.132586 1103141 logs.go:284] 1 containers: [7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd]
	I0717 20:08:46.132655 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:46.138029 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 20:08:46.138112 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 20:08:46.179589 1103141 cri.go:89] found id: ""
	I0717 20:08:46.179620 1103141 logs.go:284] 0 containers: []
	W0717 20:08:46.179632 1103141 logs.go:286] No container was found matching "kindnet"
	I0717 20:08:46.179640 1103141 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 20:08:46.179728 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 20:08:46.216615 1103141 cri.go:89] found id: "1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea"
	I0717 20:08:46.216642 1103141 cri.go:89] found id: ""
	I0717 20:08:46.216650 1103141 logs.go:284] 1 containers: [1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea]
	I0717 20:08:46.216782 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:46.223815 1103141 logs.go:123] Gathering logs for kube-scheduler [20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52] ...
	I0717 20:08:46.223849 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52"
	I0717 20:08:46.274046 1103141 logs.go:123] Gathering logs for kube-proxy [c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39] ...
	I0717 20:08:46.274093 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39"
	I0717 20:08:46.314239 1103141 logs.go:123] Gathering logs for kube-controller-manager [7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd] ...
	I0717 20:08:46.314285 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd"
	I0717 20:08:46.372521 1103141 logs.go:123] Gathering logs for kubelet ...
	I0717 20:08:46.372568 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 20:08:46.473516 1103141 logs.go:123] Gathering logs for describe nodes ...
	I0717 20:08:46.473576 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 20:08:46.628553 1103141 logs.go:123] Gathering logs for coredns [9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e] ...
	I0717 20:08:46.628626 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e"
	I0717 20:08:46.663929 1103141 logs.go:123] Gathering logs for storage-provisioner [1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea] ...
	I0717 20:08:46.663976 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea"
	I0717 20:08:46.699494 1103141 logs.go:123] Gathering logs for CRI-O ...
	I0717 20:08:46.699528 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 20:08:47.188357 1103141 logs.go:123] Gathering logs for container status ...
	I0717 20:08:47.188415 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 20:08:47.246863 1103141 logs.go:123] Gathering logs for dmesg ...
	I0717 20:08:47.246902 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 20:08:47.262383 1103141 logs.go:123] Gathering logs for kube-apiserver [b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f] ...
	I0717 20:08:47.262418 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f"
	I0717 20:08:47.315465 1103141 logs.go:123] Gathering logs for etcd [6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8] ...
	I0717 20:08:47.315506 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8"
	I0717 20:08:49.862911 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:08:49.880685 1103141 api_server.go:72] duration metric: took 4m9.416465331s to wait for apiserver process to appear ...
	I0717 20:08:49.880717 1103141 api_server.go:88] waiting for apiserver healthz status ...
	I0717 20:08:49.880763 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 20:08:49.880828 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 20:08:49.921832 1103141 cri.go:89] found id: "b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f"
	I0717 20:08:49.921858 1103141 cri.go:89] found id: ""
	I0717 20:08:49.921867 1103141 logs.go:284] 1 containers: [b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f]
	I0717 20:08:49.921922 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:49.927202 1103141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 20:08:49.927281 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 20:08:49.962760 1103141 cri.go:89] found id: "6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8"
	I0717 20:08:49.962784 1103141 cri.go:89] found id: ""
	I0717 20:08:49.962793 1103141 logs.go:284] 1 containers: [6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8]
	I0717 20:08:49.962850 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:49.968029 1103141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 20:08:49.968123 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 20:08:50.004191 1103141 cri.go:89] found id: "9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e"
	I0717 20:08:50.004230 1103141 cri.go:89] found id: ""
	I0717 20:08:50.004239 1103141 logs.go:284] 1 containers: [9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e]
	I0717 20:08:50.004308 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:50.009150 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 20:08:50.009223 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 20:08:50.041085 1103141 cri.go:89] found id: "20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52"
	I0717 20:08:50.041109 1103141 cri.go:89] found id: ""
	I0717 20:08:50.041118 1103141 logs.go:284] 1 containers: [20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52]
	I0717 20:08:50.041170 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:50.045541 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 20:08:50.045632 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 20:08:50.082404 1103141 cri.go:89] found id: "c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39"
	I0717 20:08:50.082439 1103141 cri.go:89] found id: ""
	I0717 20:08:50.082448 1103141 logs.go:284] 1 containers: [c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39]
	I0717 20:08:50.082510 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:50.087838 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 20:08:50.087928 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 20:08:50.130019 1103141 cri.go:89] found id: "7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd"
	I0717 20:08:50.130053 1103141 cri.go:89] found id: ""
	I0717 20:08:50.130065 1103141 logs.go:284] 1 containers: [7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd]
	I0717 20:08:50.130134 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:50.134894 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 20:08:50.134974 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 20:08:50.171033 1103141 cri.go:89] found id: ""
	I0717 20:08:50.171070 1103141 logs.go:284] 0 containers: []
	W0717 20:08:50.171081 1103141 logs.go:286] No container was found matching "kindnet"
	I0717 20:08:50.171088 1103141 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 20:08:50.171158 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 20:08:50.206952 1103141 cri.go:89] found id: "1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea"
	I0717 20:08:50.206984 1103141 cri.go:89] found id: ""
	I0717 20:08:50.206996 1103141 logs.go:284] 1 containers: [1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea]
	I0717 20:08:50.207064 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:50.211123 1103141 logs.go:123] Gathering logs for etcd [6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8] ...
	I0717 20:08:50.211152 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8"
	I0717 20:08:50.257982 1103141 logs.go:123] Gathering logs for coredns [9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e] ...
	I0717 20:08:50.258031 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e"
	I0717 20:08:50.293315 1103141 logs.go:123] Gathering logs for kube-scheduler [20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52] ...
	I0717 20:08:50.293371 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52"
	I0717 20:08:50.343183 1103141 logs.go:123] Gathering logs for kube-proxy [c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39] ...
	I0717 20:08:50.343235 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39"
	I0717 20:08:50.381821 1103141 logs.go:123] Gathering logs for kubelet ...
	I0717 20:08:50.381869 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 20:08:50.487833 1103141 logs.go:123] Gathering logs for dmesg ...
	I0717 20:08:50.487878 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 20:08:50.504213 1103141 logs.go:123] Gathering logs for describe nodes ...
	I0717 20:08:50.504259 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 20:08:50.638194 1103141 logs.go:123] Gathering logs for kube-apiserver [b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f] ...
	I0717 20:08:50.638230 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f"
	I0717 20:08:50.685572 1103141 logs.go:123] Gathering logs for kube-controller-manager [7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd] ...
	I0717 20:08:50.685627 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd"
	I0717 20:08:50.740133 1103141 logs.go:123] Gathering logs for storage-provisioner [1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea] ...
	I0717 20:08:50.740188 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea"
	I0717 20:08:50.778023 1103141 logs.go:123] Gathering logs for CRI-O ...
	I0717 20:08:50.778059 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 20:08:51.310702 1103141 logs.go:123] Gathering logs for container status ...
	I0717 20:08:51.310758 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 20:08:53.857949 1103141 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0717 20:08:53.864729 1103141 api_server.go:279] https://192.168.39.213:8443/healthz returned 200:
	ok
	I0717 20:08:53.866575 1103141 api_server.go:141] control plane version: v1.27.3
	I0717 20:08:53.866605 1103141 api_server.go:131] duration metric: took 3.985881495s to wait for apiserver health ...
	I0717 20:08:53.866613 1103141 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 20:08:53.866638 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 20:08:53.866687 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 20:08:53.902213 1103141 cri.go:89] found id: "b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f"
	I0717 20:08:53.902243 1103141 cri.go:89] found id: ""
	I0717 20:08:53.902252 1103141 logs.go:284] 1 containers: [b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f]
	I0717 20:08:53.902320 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:53.906976 1103141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 20:08:53.907073 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 20:08:53.946040 1103141 cri.go:89] found id: "6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8"
	I0717 20:08:53.946063 1103141 cri.go:89] found id: ""
	I0717 20:08:53.946071 1103141 logs.go:284] 1 containers: [6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8]
	I0717 20:08:53.946150 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:53.951893 1103141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 20:08:53.951963 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 20:08:53.988546 1103141 cri.go:89] found id: "9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e"
	I0717 20:08:53.988583 1103141 cri.go:89] found id: ""
	I0717 20:08:53.988594 1103141 logs.go:284] 1 containers: [9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e]
	I0717 20:08:53.988647 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:53.994338 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 20:08:53.994428 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 20:08:54.030092 1103141 cri.go:89] found id: "20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52"
	I0717 20:08:54.030123 1103141 cri.go:89] found id: ""
	I0717 20:08:54.030133 1103141 logs.go:284] 1 containers: [20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52]
	I0717 20:08:54.030198 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:54.035081 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 20:08:54.035189 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 20:08:54.069845 1103141 cri.go:89] found id: "c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39"
	I0717 20:08:54.069878 1103141 cri.go:89] found id: ""
	I0717 20:08:54.069889 1103141 logs.go:284] 1 containers: [c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39]
	I0717 20:08:54.069952 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:54.075257 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 20:08:54.075334 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 20:08:54.114477 1103141 cri.go:89] found id: "7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd"
	I0717 20:08:54.114516 1103141 cri.go:89] found id: ""
	I0717 20:08:54.114527 1103141 logs.go:284] 1 containers: [7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd]
	I0717 20:08:54.114602 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:54.119374 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 20:08:54.119464 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 20:08:54.160628 1103141 cri.go:89] found id: ""
	I0717 20:08:54.160660 1103141 logs.go:284] 0 containers: []
	W0717 20:08:54.160672 1103141 logs.go:286] No container was found matching "kindnet"
	I0717 20:08:54.160680 1103141 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 20:08:54.160752 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 20:08:54.200535 1103141 cri.go:89] found id: "1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea"
	I0717 20:08:54.200662 1103141 cri.go:89] found id: ""
	I0717 20:08:54.200674 1103141 logs.go:284] 1 containers: [1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea]
	I0717 20:08:54.200736 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:54.205923 1103141 logs.go:123] Gathering logs for dmesg ...
	I0717 20:08:54.205958 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 20:08:54.221020 1103141 logs.go:123] Gathering logs for describe nodes ...
	I0717 20:08:54.221057 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 20:08:54.381122 1103141 logs.go:123] Gathering logs for coredns [9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e] ...
	I0717 20:08:54.381163 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e"
	I0717 20:08:54.417207 1103141 logs.go:123] Gathering logs for kube-scheduler [20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52] ...
	I0717 20:08:54.417255 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52"
	I0717 20:08:54.469346 1103141 logs.go:123] Gathering logs for kube-proxy [c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39] ...
	I0717 20:08:54.469389 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39"
	I0717 20:08:54.513216 1103141 logs.go:123] Gathering logs for CRI-O ...
	I0717 20:08:54.513258 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 20:08:55.056597 1103141 logs.go:123] Gathering logs for kubelet ...
	I0717 20:08:55.056644 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 20:08:55.168622 1103141 logs.go:123] Gathering logs for kube-apiserver [b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f] ...
	I0717 20:08:55.168669 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f"
	I0717 20:08:55.220979 1103141 logs.go:123] Gathering logs for etcd [6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8] ...
	I0717 20:08:55.221038 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8"
	I0717 20:08:55.264086 1103141 logs.go:123] Gathering logs for kube-controller-manager [7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd] ...
	I0717 20:08:55.264124 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd"
	I0717 20:08:55.317931 1103141 logs.go:123] Gathering logs for storage-provisioner [1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea] ...
	I0717 20:08:55.317974 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea"
	I0717 20:08:55.357733 1103141 logs.go:123] Gathering logs for container status ...
	I0717 20:08:55.357770 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 20:08:57.919739 1103141 system_pods.go:59] 8 kube-system pods found
	I0717 20:08:57.919785 1103141 system_pods.go:61] "coredns-5d78c9869d-gq2b2" [833e67fa-16e2-4a5c-8c39-16cc4fbd411e] Running
	I0717 20:08:57.919795 1103141 system_pods.go:61] "etcd-embed-certs-114855" [7209c449-fbf1-4343-8636-e872684db832] Running
	I0717 20:08:57.919808 1103141 system_pods.go:61] "kube-apiserver-embed-certs-114855" [d926dfc1-71e8-44cb-9efe-4c37e0982b02] Running
	I0717 20:08:57.919817 1103141 system_pods.go:61] "kube-controller-manager-embed-certs-114855" [e16de906-3b66-4882-83ca-8d5476d45d96] Running
	I0717 20:08:57.919823 1103141 system_pods.go:61] "kube-proxy-bfvnl" [6f7fb55d-fa9f-4d08-b4ab-3814af550c01] Running
	I0717 20:08:57.919830 1103141 system_pods.go:61] "kube-scheduler-embed-certs-114855" [828c7a2f-dd4b-4318-8199-026970bb3159] Running
	I0717 20:08:57.919850 1103141 system_pods.go:61] "metrics-server-74d5c6b9c-jvfz8" [f861e320-9125-4081-b043-c90d8b027f71] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:08:57.919859 1103141 system_pods.go:61] "storage-provisioner" [994ec0db-08aa-4dd5-a137-1f6984051e65] Running
	I0717 20:08:57.919866 1103141 system_pods.go:74] duration metric: took 4.053247674s to wait for pod list to return data ...
	I0717 20:08:57.919876 1103141 default_sa.go:34] waiting for default service account to be created ...
	I0717 20:08:57.925726 1103141 default_sa.go:45] found service account: "default"
	I0717 20:08:57.925756 1103141 default_sa.go:55] duration metric: took 5.874288ms for default service account to be created ...
	I0717 20:08:57.925765 1103141 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 20:08:57.934835 1103141 system_pods.go:86] 8 kube-system pods found
	I0717 20:08:57.934869 1103141 system_pods.go:89] "coredns-5d78c9869d-gq2b2" [833e67fa-16e2-4a5c-8c39-16cc4fbd411e] Running
	I0717 20:08:57.934875 1103141 system_pods.go:89] "etcd-embed-certs-114855" [7209c449-fbf1-4343-8636-e872684db832] Running
	I0717 20:08:57.934880 1103141 system_pods.go:89] "kube-apiserver-embed-certs-114855" [d926dfc1-71e8-44cb-9efe-4c37e0982b02] Running
	I0717 20:08:57.934886 1103141 system_pods.go:89] "kube-controller-manager-embed-certs-114855" [e16de906-3b66-4882-83ca-8d5476d45d96] Running
	I0717 20:08:57.934890 1103141 system_pods.go:89] "kube-proxy-bfvnl" [6f7fb55d-fa9f-4d08-b4ab-3814af550c01] Running
	I0717 20:08:57.934894 1103141 system_pods.go:89] "kube-scheduler-embed-certs-114855" [828c7a2f-dd4b-4318-8199-026970bb3159] Running
	I0717 20:08:57.934903 1103141 system_pods.go:89] "metrics-server-74d5c6b9c-jvfz8" [f861e320-9125-4081-b043-c90d8b027f71] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:08:57.934908 1103141 system_pods.go:89] "storage-provisioner" [994ec0db-08aa-4dd5-a137-1f6984051e65] Running
	I0717 20:08:57.934917 1103141 system_pods.go:126] duration metric: took 9.146607ms to wait for k8s-apps to be running ...
	I0717 20:08:57.934924 1103141 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 20:08:57.934972 1103141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:08:57.952480 1103141 system_svc.go:56] duration metric: took 17.537719ms WaitForService to wait for kubelet.
	I0717 20:08:57.952531 1103141 kubeadm.go:581] duration metric: took 4m17.48831739s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 20:08:57.952581 1103141 node_conditions.go:102] verifying NodePressure condition ...
	I0717 20:08:57.956510 1103141 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 20:08:57.956581 1103141 node_conditions.go:123] node cpu capacity is 2
	I0717 20:08:57.956599 1103141 node_conditions.go:105] duration metric: took 4.010178ms to run NodePressure ...
	I0717 20:08:57.956633 1103141 start.go:228] waiting for startup goroutines ...
	I0717 20:08:57.956646 1103141 start.go:233] waiting for cluster config update ...
	I0717 20:08:57.956665 1103141 start.go:242] writing updated cluster config ...
	I0717 20:08:57.957107 1103141 ssh_runner.go:195] Run: rm -f paused
	I0717 20:08:58.016891 1103141 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 20:08:58.019566 1103141 out.go:177] * Done! kubectl is now configured to use "embed-certs-114855" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-07-17 19:58:06 UTC, ends at Mon 2023-07-17 20:12:26 UTC. --
	Jul 17 20:12:26 no-preload-408472 crio[723]: time="2023-07-17 20:12:26.430745837Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=77fc20ea-d7ee-4105-b353-f30cf4c55dba name=/runtime.v1.RuntimeService/Status
	Jul 17 20:12:26 no-preload-408472 crio[723]: time="2023-07-17 20:12:26.565288797Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=96929bc2-567b-4f8d-877e-1634a952534c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:12:26 no-preload-408472 crio[723]: time="2023-07-17 20:12:26.565438942Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=96929bc2-567b-4f8d-877e-1634a952534c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:12:26 no-preload-408472 crio[723]: time="2023-07-17 20:12:26.565660515Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447,PodSandboxId:ff5ad5d7dd32f07cfc1c1208551ac8f3f4b8a46bfe51e0c02b2a396b98e4739e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1689623971491805613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1aefd8ef-dec9-4e37-8648-8e5a62622cd3,},Annotations:map[string]string{io.kubernetes.container.hash: 81256f0b,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f1dd1f8bfc6ea5fc2d525f06a7f97e022380348ee01a95515fe2f2ce720db01,PodSandboxId:f3cb758549447a012fdd15abdb36701951fc4124b28d80f35ab3b605e33c55b7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689623946896235359,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5c8e4faa-fb22-4e2f-a383-de7b5122346b,},Annotations:map[string]string{io.kubernetes.container.hash: a627707f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce,PodSandboxId:a33bbe47c7157f306c2cccc2a008e4a8da0f139d93938630c79f53f73f354f49,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1689623945137959545,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-9mxdj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbff09fd-436d-4208-9187-b6312aa1c223,},Annotations:map[string]string{io.kubernetes.container.hash: dd58a968,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a,PodSandboxId:47139040a14d65679324a9cf0e054dd8bc5674f553198db965b9d67d3a8b2a93,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5c781e52b7df26c513fd7a17cb064f14d0e54ec60c78bb251d66373acabee06f,State:CONTAINER_RUNNING,CreatedAt:1689623940422461512,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cntdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8653567b-a
bf9-468c-a030-45fc53fa0cc2,},Annotations:map[string]string{io.kubernetes.container.hash: 8ee9a8a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379,PodSandboxId:ff5ad5d7dd32f07cfc1c1208551ac8f3f4b8a46bfe51e0c02b2a396b98e4739e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1689623940308828471,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1aefd8ef-dec
9-4e37-8648-8e5a62622cd3,},Annotations:map[string]string{io.kubernetes.container.hash: 81256f0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc,PodSandboxId:5b4b09ff53722684ddeca8822d7d45236df17428043c8548bbd076e384d8527f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:debe9299a031af8da434bff972ab85d954976787aea87f06307f9212e6c94efb,State:CONTAINER_RUNNING,CreatedAt:1689623932754672917,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-408472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7764e32ea62c4e843571c1c8b26e43,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 871ad5da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9,PodSandboxId:57e1bf1c090b7c994b8721816c39394f5f4ebe3efa6b4ee27f8e616ceb0c2504,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5e1d3a533b714d2b891c6bdbdcbe86051e96eb7a8b76c751f540f7d4de3fd133,State:CONTAINER_RUNNING,CreatedAt:1689623932584128267,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-408472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e80a485a65dc98d5f01a92b53c5fa5,}
,Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5,PodSandboxId:4be199ed8e485981d4a0f5659c2dfd25f5d143ddab31ff6dd001a4d53ec1313c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:1312d079d0fd189075a309c99b061074e04ca1796d87edf7fd5f7aa7ea021439,State:CONTAINER_RUNNING,CreatedAt:1689623932544937261,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-408472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 576be588fe38e2234a5c6f2fb28de233,},Annotation
s:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3,PodSandboxId:37d2b34c5db08f953540e06b283123cadf97d1120d2186b7b73e68f0cd97da2f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:d1c84bdb695b9d4c6d5a5ffc1370dd8e39cc36a06772e31f78deed3e29ac2cef,State:CONTAINER_RUNNING,CreatedAt:1689623932255864684,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-408472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c8bc648017d2e10f87a375e6d180ad7,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 6efb3e9a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=96929bc2-567b-4f8d-877e-1634a952534c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:12:26 no-preload-408472 crio[723]: time="2023-07-17 20:12:26.603632915Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=57b33827-5cab-427c-a5d0-de38edbd0fe7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:12:26 no-preload-408472 crio[723]: time="2023-07-17 20:12:26.603728182Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=57b33827-5cab-427c-a5d0-de38edbd0fe7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:12:26 no-preload-408472 crio[723]: time="2023-07-17 20:12:26.603980533Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447,PodSandboxId:ff5ad5d7dd32f07cfc1c1208551ac8f3f4b8a46bfe51e0c02b2a396b98e4739e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1689623971491805613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1aefd8ef-dec9-4e37-8648-8e5a62622cd3,},Annotations:map[string]string{io.kubernetes.container.hash: 81256f0b,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f1dd1f8bfc6ea5fc2d525f06a7f97e022380348ee01a95515fe2f2ce720db01,PodSandboxId:f3cb758549447a012fdd15abdb36701951fc4124b28d80f35ab3b605e33c55b7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689623946896235359,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5c8e4faa-fb22-4e2f-a383-de7b5122346b,},Annotations:map[string]string{io.kubernetes.container.hash: a627707f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce,PodSandboxId:a33bbe47c7157f306c2cccc2a008e4a8da0f139d93938630c79f53f73f354f49,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1689623945137959545,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-9mxdj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbff09fd-436d-4208-9187-b6312aa1c223,},Annotations:map[string]string{io.kubernetes.container.hash: dd58a968,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a,PodSandboxId:47139040a14d65679324a9cf0e054dd8bc5674f553198db965b9d67d3a8b2a93,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5c781e52b7df26c513fd7a17cb064f14d0e54ec60c78bb251d66373acabee06f,State:CONTAINER_RUNNING,CreatedAt:1689623940422461512,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cntdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8653567b-a
bf9-468c-a030-45fc53fa0cc2,},Annotations:map[string]string{io.kubernetes.container.hash: 8ee9a8a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379,PodSandboxId:ff5ad5d7dd32f07cfc1c1208551ac8f3f4b8a46bfe51e0c02b2a396b98e4739e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1689623940308828471,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1aefd8ef-dec
9-4e37-8648-8e5a62622cd3,},Annotations:map[string]string{io.kubernetes.container.hash: 81256f0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc,PodSandboxId:5b4b09ff53722684ddeca8822d7d45236df17428043c8548bbd076e384d8527f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:debe9299a031af8da434bff972ab85d954976787aea87f06307f9212e6c94efb,State:CONTAINER_RUNNING,CreatedAt:1689623932754672917,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-408472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7764e32ea62c4e843571c1c8b26e43,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 871ad5da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9,PodSandboxId:57e1bf1c090b7c994b8721816c39394f5f4ebe3efa6b4ee27f8e616ceb0c2504,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5e1d3a533b714d2b891c6bdbdcbe86051e96eb7a8b76c751f540f7d4de3fd133,State:CONTAINER_RUNNING,CreatedAt:1689623932584128267,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-408472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e80a485a65dc98d5f01a92b53c5fa5,}
,Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5,PodSandboxId:4be199ed8e485981d4a0f5659c2dfd25f5d143ddab31ff6dd001a4d53ec1313c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:1312d079d0fd189075a309c99b061074e04ca1796d87edf7fd5f7aa7ea021439,State:CONTAINER_RUNNING,CreatedAt:1689623932544937261,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-408472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 576be588fe38e2234a5c6f2fb28de233,},Annotation
s:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3,PodSandboxId:37d2b34c5db08f953540e06b283123cadf97d1120d2186b7b73e68f0cd97da2f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:d1c84bdb695b9d4c6d5a5ffc1370dd8e39cc36a06772e31f78deed3e29ac2cef,State:CONTAINER_RUNNING,CreatedAt:1689623932255864684,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-408472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c8bc648017d2e10f87a375e6d180ad7,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 6efb3e9a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=57b33827-5cab-427c-a5d0-de38edbd0fe7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:12:26 no-preload-408472 crio[723]: time="2023-07-17 20:12:26.643928012Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=df698c47-8c59-4b78-8af2-b5252dca9069 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:12:26 no-preload-408472 crio[723]: time="2023-07-17 20:12:26.644053423Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=df698c47-8c59-4b78-8af2-b5252dca9069 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:12:26 no-preload-408472 crio[723]: time="2023-07-17 20:12:26.644503230Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447,PodSandboxId:ff5ad5d7dd32f07cfc1c1208551ac8f3f4b8a46bfe51e0c02b2a396b98e4739e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1689623971491805613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1aefd8ef-dec9-4e37-8648-8e5a62622cd3,},Annotations:map[string]string{io.kubernetes.container.hash: 81256f0b,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f1dd1f8bfc6ea5fc2d525f06a7f97e022380348ee01a95515fe2f2ce720db01,PodSandboxId:f3cb758549447a012fdd15abdb36701951fc4124b28d80f35ab3b605e33c55b7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689623946896235359,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5c8e4faa-fb22-4e2f-a383-de7b5122346b,},Annotations:map[string]string{io.kubernetes.container.hash: a627707f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce,PodSandboxId:a33bbe47c7157f306c2cccc2a008e4a8da0f139d93938630c79f53f73f354f49,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1689623945137959545,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-9mxdj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbff09fd-436d-4208-9187-b6312aa1c223,},Annotations:map[string]string{io.kubernetes.container.hash: dd58a968,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a,PodSandboxId:47139040a14d65679324a9cf0e054dd8bc5674f553198db965b9d67d3a8b2a93,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5c781e52b7df26c513fd7a17cb064f14d0e54ec60c78bb251d66373acabee06f,State:CONTAINER_RUNNING,CreatedAt:1689623940422461512,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cntdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8653567b-a
bf9-468c-a030-45fc53fa0cc2,},Annotations:map[string]string{io.kubernetes.container.hash: 8ee9a8a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379,PodSandboxId:ff5ad5d7dd32f07cfc1c1208551ac8f3f4b8a46bfe51e0c02b2a396b98e4739e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1689623940308828471,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1aefd8ef-dec
9-4e37-8648-8e5a62622cd3,},Annotations:map[string]string{io.kubernetes.container.hash: 81256f0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc,PodSandboxId:5b4b09ff53722684ddeca8822d7d45236df17428043c8548bbd076e384d8527f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:debe9299a031af8da434bff972ab85d954976787aea87f06307f9212e6c94efb,State:CONTAINER_RUNNING,CreatedAt:1689623932754672917,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-408472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7764e32ea62c4e843571c1c8b26e43,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 871ad5da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9,PodSandboxId:57e1bf1c090b7c994b8721816c39394f5f4ebe3efa6b4ee27f8e616ceb0c2504,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5e1d3a533b714d2b891c6bdbdcbe86051e96eb7a8b76c751f540f7d4de3fd133,State:CONTAINER_RUNNING,CreatedAt:1689623932584128267,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-408472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e80a485a65dc98d5f01a92b53c5fa5,}
,Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5,PodSandboxId:4be199ed8e485981d4a0f5659c2dfd25f5d143ddab31ff6dd001a4d53ec1313c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:1312d079d0fd189075a309c99b061074e04ca1796d87edf7fd5f7aa7ea021439,State:CONTAINER_RUNNING,CreatedAt:1689623932544937261,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-408472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 576be588fe38e2234a5c6f2fb28de233,},Annotation
s:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3,PodSandboxId:37d2b34c5db08f953540e06b283123cadf97d1120d2186b7b73e68f0cd97da2f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:d1c84bdb695b9d4c6d5a5ffc1370dd8e39cc36a06772e31f78deed3e29ac2cef,State:CONTAINER_RUNNING,CreatedAt:1689623932255864684,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-408472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c8bc648017d2e10f87a375e6d180ad7,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 6efb3e9a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=df698c47-8c59-4b78-8af2-b5252dca9069 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	434d3b3c5d986       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       3                   ff5ad5d7dd32f
	1f1dd1f8bfc6e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   f3cb758549447
	63dc2a3f8ace5       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   a33bbe47c7157
	c8746d568c4d0       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c                                      13 minutes ago      Running             kube-proxy                1                   47139040a14d6
	cb2ddc8935dcd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   ff5ad5d7dd32f
	4a90287e5fc16       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681                                      13 minutes ago      Running             etcd                      1                   5b4b09ff53722
	2ba1ed857458d       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f                                      13 minutes ago      Running             kube-controller-manager   1                   57e1bf1c090b7
	0db29fec08ce9       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a                                      13 minutes ago      Running             kube-scheduler            1                   4be199ed8e485
	eec27ef53d6bc       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a                                      13 minutes ago      Running             kube-apiserver            1                   37d2b34c5db08
	
	* 
	* ==> coredns [63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:53754 - 13599 "HINFO IN 595208134056070901.5201549753386648626. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.028425985s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-408472
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-408472
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5
	                    minikube.k8s.io/name=no-preload-408472
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T19_49_48_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 19:49:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-408472
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 20:12:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 20:09:40 +0000   Mon, 17 Jul 2023 19:49:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 20:09:40 +0000   Mon, 17 Jul 2023 19:49:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 20:09:40 +0000   Mon, 17 Jul 2023 19:49:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 20:09:40 +0000   Mon, 17 Jul 2023 19:59:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.65
	  Hostname:    no-preload-408472
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 4af696079b3a42e08bf5e45b6c9af525
	  System UUID:                4af69607-9b3a-42e0-8bf5-e45b6c9af525
	  Boot ID:                    ad4ad896-f9d0-475d-9d7f-ee3c3d9b501b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-5d78c9869d-9mxdj                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-no-preload-408472                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-no-preload-408472             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-no-preload-408472    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-cntdn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-no-preload-408472             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-74d5c6b9c-hnngh               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node no-preload-408472 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node no-preload-408472 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node no-preload-408472 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                22m                kubelet          Node no-preload-408472 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node no-preload-408472 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node no-preload-408472 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m                kubelet          Node no-preload-408472 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           22m                node-controller  Node no-preload-408472 event: Registered Node no-preload-408472 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-408472 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-408472 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-408472 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-408472 event: Registered Node no-preload-408472 in Controller
	
	* 
	* ==> dmesg <==
	* [Jul17 19:57] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.073696] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Jul17 19:58] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.604245] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.146173] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.515395] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.291847] systemd-fstab-generator[648]: Ignoring "noauto" for root device
	[  +0.118209] systemd-fstab-generator[659]: Ignoring "noauto" for root device
	[  +0.165153] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.111246] systemd-fstab-generator[683]: Ignoring "noauto" for root device
	[  +0.250866] systemd-fstab-generator[707]: Ignoring "noauto" for root device
	[ +31.372713] systemd-fstab-generator[1239]: Ignoring "noauto" for root device
	[Jul17 19:59] kauditd_printk_skb: 29 callbacks suppressed
	
	* 
	* ==> etcd [4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc] <==
	* {"level":"info","ts":"2023-07-17T19:59:11.445Z","caller":"traceutil/trace.go:171","msg":"trace[1034300963] transaction","detail":"{read_only:false; response_revision:611; number_of_response:1; }","duration":"310.239421ms","start":"2023-07-17T19:59:11.135Z","end":"2023-07-17T19:59:11.445Z","steps":["trace[1034300963] 'process raft request'  (duration: 306.655549ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T19:59:11.445Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T19:59:11.135Z","time spent":"310.290721ms","remote":"127.0.0.1:56132","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":987,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/metrics-server\" mod_revision:512 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/metrics-server\" value_size:924 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/metrics-server\" > >"}
	{"level":"info","ts":"2023-07-17T19:59:11.446Z","caller":"traceutil/trace.go:171","msg":"trace[1344318666] transaction","detail":"{read_only:false; response_revision:612; number_of_response:1; }","duration":"305.176564ms","start":"2023-07-17T19:59:11.141Z","end":"2023-07-17T19:59:11.446Z","steps":["trace[1344318666] 'process raft request'  (duration: 301.271819ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T19:59:11.446Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T19:59:11.141Z","time spent":"305.274495ms","remote":"127.0.0.1:56200","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4118,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:607 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4069 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	{"level":"warn","ts":"2023-07-17T19:59:11.451Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"315.827526ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-no-preload-408472\" ","response":"range_response_count:1 size:6343"}
	{"level":"info","ts":"2023-07-17T19:59:11.451Z","caller":"traceutil/trace.go:171","msg":"trace[842066160] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-no-preload-408472; range_end:; response_count:1; response_revision:612; }","duration":"315.978161ms","start":"2023-07-17T19:59:11.135Z","end":"2023-07-17T19:59:11.451Z","steps":["trace[842066160] 'agreement among raft nodes before linearized reading'  (duration: 315.739367ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T19:59:11.451Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T19:59:11.135Z","time spent":"316.213528ms","remote":"127.0.0.1:56136","response type":"/etcdserverpb.KV/Range","request count":0,"request size":70,"response count":1,"response size":6366,"request content":"key:\"/registry/pods/kube-system/kube-controller-manager-no-preload-408472\" "}
	{"level":"warn","ts":"2023-07-17T19:59:11.451Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"305.005177ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/metrics-server\" ","response":"range_response_count:1 size:5182"}
	{"level":"info","ts":"2023-07-17T19:59:11.451Z","caller":"traceutil/trace.go:171","msg":"trace[1619904632] range","detail":"{range_begin:/registry/deployments/kube-system/metrics-server; range_end:; response_count:1; response_revision:612; }","duration":"305.084461ms","start":"2023-07-17T19:59:11.146Z","end":"2023-07-17T19:59:11.451Z","steps":["trace[1619904632] 'agreement among raft nodes before linearized reading'  (duration: 304.96695ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T19:59:11.452Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T19:59:11.146Z","time spent":"305.155743ms","remote":"127.0.0.1:56200","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":1,"response size":5205,"request content":"key:\"/registry/deployments/kube-system/metrics-server\" "}
	{"level":"warn","ts":"2023-07-17T19:59:11.452Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"305.373749ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4133"}
	{"level":"info","ts":"2023-07-17T19:59:11.452Z","caller":"traceutil/trace.go:171","msg":"trace[400271151] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:612; }","duration":"305.530541ms","start":"2023-07-17T19:59:11.146Z","end":"2023-07-17T19:59:11.452Z","steps":["trace[400271151] 'agreement among raft nodes before linearized reading'  (duration: 305.322037ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T19:59:11.452Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T19:59:11.146Z","time spent":"305.574073ms","remote":"127.0.0.1:56200","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":1,"response size":4156,"request content":"key:\"/registry/deployments/kube-system/coredns\" "}
	{"level":"warn","ts":"2023-07-17T19:59:11.452Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.521193ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-no-preload-408472\" ","response":"range_response_count:1 size:6727"}
	{"level":"info","ts":"2023-07-17T19:59:11.452Z","caller":"traceutil/trace.go:171","msg":"trace[575452763] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-no-preload-408472; range_end:; response_count:1; response_revision:612; }","duration":"149.584784ms","start":"2023-07-17T19:59:11.303Z","end":"2023-07-17T19:59:11.452Z","steps":["trace[575452763] 'agreement among raft nodes before linearized reading'  (duration: 149.48203ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T19:59:11.879Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.160787ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15486904287370853960 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-no-preload-408472\" mod_revision:520 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-no-preload-408472\" value_size:6252 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-no-preload-408472\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-07-17T19:59:11.879Z","caller":"traceutil/trace.go:171","msg":"trace[387970279] linearizableReadLoop","detail":"{readStateIndex:656; appliedIndex:655; }","duration":"396.454267ms","start":"2023-07-17T19:59:11.483Z","end":"2023-07-17T19:59:11.879Z","steps":["trace[387970279] 'read index received'  (duration: 236.277283ms)","trace[387970279] 'applied index is now lower than readState.Index'  (duration: 160.175907ms)"],"step_count":2}
	{"level":"info","ts":"2023-07-17T19:59:11.879Z","caller":"traceutil/trace.go:171","msg":"trace[1839751726] transaction","detail":"{read_only:false; response_revision:613; number_of_response:1; }","duration":"398.595379ms","start":"2023-07-17T19:59:11.481Z","end":"2023-07-17T19:59:11.879Z","steps":["trace[1839751726] 'process raft request'  (duration: 238.042921ms)","trace[1839751726] 'compare'  (duration: 159.987931ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-17T19:59:11.880Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T19:59:11.481Z","time spent":"398.657132ms","remote":"127.0.0.1:56136","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6328,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-no-preload-408472\" mod_revision:520 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-no-preload-408472\" value_size:6252 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-no-preload-408472\" > >"}
	{"level":"warn","ts":"2023-07-17T19:59:11.880Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"396.944796ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-17T19:59:11.880Z","caller":"traceutil/trace.go:171","msg":"trace[2087394543] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:613; }","duration":"396.968671ms","start":"2023-07-17T19:59:11.483Z","end":"2023-07-17T19:59:11.880Z","steps":["trace[2087394543] 'agreement among raft nodes before linearized reading'  (duration: 396.90471ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T19:59:11.880Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T19:59:11.483Z","time spent":"397.002165ms","remote":"127.0.0.1:56098","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2023-07-17T20:08:56.303Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":860}
	{"level":"info","ts":"2023-07-17T20:08:56.306Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":860,"took":"2.515863ms","hash":3076115344}
	{"level":"info","ts":"2023-07-17T20:08:56.306Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3076115344,"revision":860,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  20:12:27 up 14 min,  0 users,  load average: 0.03, 0.13, 0.15
	Linux no-preload-408472 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3] <==
	* E0717 20:08:59.092342       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 20:08:59.092358       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0717 20:08:59.092238       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 20:08:59.093684       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 20:09:57.945355       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.111.153.136:443: connect: connection refused
	I0717 20:09:57.945500       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0717 20:09:59.093009       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 20:09:59.093164       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 20:09:59.093202       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 20:09:59.094200       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 20:09:59.094258       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 20:09:59.094265       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 20:10:57.945301       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.111.153.136:443: connect: connection refused
	I0717 20:10:57.945375       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0717 20:11:57.945355       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.111.153.136:443: connect: connection refused
	I0717 20:11:57.945700       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0717 20:11:59.093583       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 20:11:59.093848       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 20:11:59.093901       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 20:11:59.094760       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 20:11:59.094838       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 20:11:59.095914       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9] <==
	* W0717 20:06:11.240324       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:06:40.806084       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:06:41.254122       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:07:10.812225       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:07:11.264738       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:07:40.818536       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:07:41.275348       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:08:10.826541       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:08:11.284050       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:08:40.833262       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:08:41.293983       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:09:10.840356       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:09:11.303895       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:09:40.847157       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:09:41.312992       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:10:10.854890       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:10:11.324857       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:10:40.861318       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:10:41.335828       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:11:10.867490       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:11:11.345180       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:11:40.874116       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:11:41.354028       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:12:10.880913       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:12:11.365087       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	
	* 
	* ==> kube-proxy [c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a] <==
	* I0717 19:59:00.609142       1 node.go:141] Successfully retrieved node IP: 192.168.61.65
	I0717 19:59:00.609316       1 server_others.go:110] "Detected node IP" address="192.168.61.65"
	I0717 19:59:00.609349       1 server_others.go:554] "Using iptables proxy"
	I0717 19:59:00.649064       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0717 19:59:00.649143       1 server_others.go:192] "Using iptables Proxier"
	I0717 19:59:00.649186       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 19:59:00.649775       1 server.go:658] "Version info" version="v1.27.3"
	I0717 19:59:00.650023       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 19:59:00.651356       1 config.go:188] "Starting service config controller"
	I0717 19:59:00.651553       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 19:59:00.651599       1 config.go:97] "Starting endpoint slice config controller"
	I0717 19:59:00.651625       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 19:59:00.653297       1 config.go:315] "Starting node config controller"
	I0717 19:59:00.653344       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 19:59:00.751849       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0717 19:59:00.751916       1 shared_informer.go:318] Caches are synced for service config
	I0717 19:59:00.754332       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5] <==
	* I0717 19:58:55.114636       1 serving.go:348] Generated self-signed cert in-memory
	W0717 19:58:57.977031       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 19:58:57.977110       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 19:58:57.977140       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 19:58:57.977164       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 19:58:58.031371       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.3"
	I0717 19:58:58.034846       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 19:58:58.038624       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 19:58:58.038684       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 19:58:58.039661       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0717 19:58:58.039744       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 19:58:58.240122       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-07-17 19:58:06 UTC, ends at Mon 2023-07-17 20:12:27 UTC. --
	Jul 17 20:09:51 no-preload-408472 kubelet[1245]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 20:09:54 no-preload-408472 kubelet[1245]: E0717 20:09:54.199269    1245 remote_image.go:167] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 17 20:09:54 no-preload-408472 kubelet[1245]: E0717 20:09:54.199327    1245 kuberuntime_image.go:53] "Failed to pull image" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 17 20:09:54 no-preload-408472 kubelet[1245]: E0717 20:09:54.199661    1245 kuberuntime_manager.go:1212] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-5bwj8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod metrics-server-74d5c6b9c-hnngh_kube-system(dfff837e-dbba-4795-935d-9562d2744169): ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 17 20:09:54 no-preload-408472 kubelet[1245]: E0717 20:09:54.199718    1245 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-74d5c6b9c-hnngh" podUID=dfff837e-dbba-4795-935d-9562d2744169
	Jul 17 20:10:07 no-preload-408472 kubelet[1245]: E0717 20:10:07.182548    1245 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hnngh" podUID=dfff837e-dbba-4795-935d-9562d2744169
	Jul 17 20:10:21 no-preload-408472 kubelet[1245]: E0717 20:10:21.183925    1245 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hnngh" podUID=dfff837e-dbba-4795-935d-9562d2744169
	Jul 17 20:10:32 no-preload-408472 kubelet[1245]: E0717 20:10:32.181983    1245 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hnngh" podUID=dfff837e-dbba-4795-935d-9562d2744169
	Jul 17 20:10:45 no-preload-408472 kubelet[1245]: E0717 20:10:45.181996    1245 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hnngh" podUID=dfff837e-dbba-4795-935d-9562d2744169
	Jul 17 20:10:51 no-preload-408472 kubelet[1245]: E0717 20:10:51.207706    1245 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 17 20:10:51 no-preload-408472 kubelet[1245]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 20:10:51 no-preload-408472 kubelet[1245]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 20:10:51 no-preload-408472 kubelet[1245]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 20:10:58 no-preload-408472 kubelet[1245]: E0717 20:10:58.181375    1245 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hnngh" podUID=dfff837e-dbba-4795-935d-9562d2744169
	Jul 17 20:11:09 no-preload-408472 kubelet[1245]: E0717 20:11:09.181988    1245 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hnngh" podUID=dfff837e-dbba-4795-935d-9562d2744169
	Jul 17 20:11:24 no-preload-408472 kubelet[1245]: E0717 20:11:24.181367    1245 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hnngh" podUID=dfff837e-dbba-4795-935d-9562d2744169
	Jul 17 20:11:39 no-preload-408472 kubelet[1245]: E0717 20:11:39.182610    1245 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hnngh" podUID=dfff837e-dbba-4795-935d-9562d2744169
	Jul 17 20:11:50 no-preload-408472 kubelet[1245]: E0717 20:11:50.182720    1245 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hnngh" podUID=dfff837e-dbba-4795-935d-9562d2744169
	Jul 17 20:11:51 no-preload-408472 kubelet[1245]: E0717 20:11:51.200772    1245 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 17 20:11:51 no-preload-408472 kubelet[1245]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 20:11:51 no-preload-408472 kubelet[1245]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 20:11:51 no-preload-408472 kubelet[1245]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 20:12:01 no-preload-408472 kubelet[1245]: E0717 20:12:01.182086    1245 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hnngh" podUID=dfff837e-dbba-4795-935d-9562d2744169
	Jul 17 20:12:14 no-preload-408472 kubelet[1245]: E0717 20:12:14.181642    1245 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hnngh" podUID=dfff837e-dbba-4795-935d-9562d2744169
	Jul 17 20:12:26 no-preload-408472 kubelet[1245]: E0717 20:12:26.181905    1245 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hnngh" podUID=dfff837e-dbba-4795-935d-9562d2744169
	
	* 
	* ==> storage-provisioner [434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447] <==
	* I0717 19:59:31.729050       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 19:59:31.751948       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 19:59:31.752229       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 19:59:49.160556       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 19:59:49.161290       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-408472_25383115-a75d-491b-ab63-40bb6346fdc9!
	I0717 19:59:49.163828       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"59460376-84b8-4c43-8c5e-9241ae256687", APIVersion:"v1", ResourceVersion:"644", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-408472_25383115-a75d-491b-ab63-40bb6346fdc9 became leader
	I0717 19:59:49.263720       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-408472_25383115-a75d-491b-ab63-40bb6346fdc9!
	
	* 
	* ==> storage-provisioner [cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379] <==
	* I0717 19:59:00.490867       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0717 19:59:30.495953       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-408472 -n no-preload-408472
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-408472 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5c6b9c-hnngh
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-408472 describe pod metrics-server-74d5c6b9c-hnngh
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-408472 describe pod metrics-server-74d5c6b9c-hnngh: exit status 1 (78.987305ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5c6b9c-hnngh" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-408472 describe pod metrics-server-74d5c6b9c-hnngh: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.71s)
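The repeated metrics-server ImagePullBackOff entries in the kubelet log above are expected rather than the root cause: the Audit table further down shows the addon was enabled with --registries=MetricsServer=fake.domain, so the image fake.domain/registry.k8s.io/echoserver:1.4 can never resolve. The failure recorded here is only that the pod the test waits for never became ready within its timeout. A minimal sketch of how the same state could be inspected by hand against this profile (the context, pod name and binary path are taken from the log above; these are ordinary kubectl/minikube invocations, not part of the test suite):

# List pods that are not Running, using the same field selector the post-mortem uses.
kubectl --context no-preload-408472 get pods -A --field-selector=status.phase!=Running

# Show why metrics-server is stuck; the events should report the failed pull from
# fake.domain, matching the kubelet errors above. Note the post-mortem hit NotFound
# because the pod had already been replaced, so this only works while it still exists.
kubectl --context no-preload-408472 -n kube-system describe pod metrics-server-74d5c6b9c-hnngh

# Read recent kubelet messages directly from the node.
out/minikube-linux-amd64 -p no-preload-408472 ssh "sudo journalctl -u kubelet --since '15 min ago'"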

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.71s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0717 20:06:00.133875 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt: no such file or directory
E0717 20:06:03.520008 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-711413 -n default-k8s-diff-port-711413
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-07-17 20:12:30.056293926 +0000 UTC m=+5351.364989471
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
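The wait that just timed out is a plain label-selector wait in the kubernetes-dashboard namespace. As a rough hand-run equivalent (context name, namespace and label come from the lines above; the 9m timeout mirrors the test's budget), keeping in mind that kubectl wait errors out immediately if no pod matches the selector yet, whereas the test keeps polling:

kubectl --context default-k8s-diff-port-711413 -n kubernetes-dashboard \
  wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m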
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-711413 -n default-k8s-diff-port-711413
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-711413 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-711413 logs -n 25: (1.860939908s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p no-preload-408472             | no-preload-408472            | jenkins | v1.30.1 | 17 Jul 23 19:50 UTC | 17 Jul 23 19:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-408472                                   | no-preload-408472            | jenkins | v1.30.1 | 17 Jul 23 19:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-711413  | default-k8s-diff-port-711413 | jenkins | v1.30.1 | 17 Jul 23 19:51 UTC | 17 Jul 23 19:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-711413 | jenkins | v1.30.1 | 17 Jul 23 19:51 UTC |                     |
	|         | default-k8s-diff-port-711413                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-891260             | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:51 UTC | 17 Jul 23 19:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-891260                                   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:51 UTC | 17 Jul 23 19:51 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-891260                  | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:51 UTC | 17 Jul 23 19:51 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-891260 --memory=2200 --alsologtostderr   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:51 UTC | 17 Jul 23 19:52 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-891260 sudo                              | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-891260                                   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-891260                                   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-891260                                   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	| delete  | -p newest-cni-891260                                   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	| delete  | -p                                                     | disable-driver-mounts-178387 | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	|         | disable-driver-mounts-178387                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-114855                                  | embed-certs-114855           | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-149000             | old-k8s-version-149000       | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-149000                              | old-k8s-version-149000       | jenkins | v1.30.1 | 17 Jul 23 19:53 UTC | 17 Jul 23 20:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-408472                  | no-preload-408472            | jenkins | v1.30.1 | 17 Jul 23 19:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-408472                                   | no-preload-408472            | jenkins | v1.30.1 | 17 Jul 23 19:53 UTC | 17 Jul 23 20:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-711413       | default-k8s-diff-port-711413 | jenkins | v1.30.1 | 17 Jul 23 19:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-711413 | jenkins | v1.30.1 | 17 Jul 23 19:54 UTC | 17 Jul 23 20:03 UTC |
	|         | default-k8s-diff-port-711413                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-114855            | embed-certs-114855           | jenkins | v1.30.1 | 17 Jul 23 19:54 UTC | 17 Jul 23 19:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-114855                                  | embed-certs-114855           | jenkins | v1.30.1 | 17 Jul 23 19:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-114855                 | embed-certs-114855           | jenkins | v1.30.1 | 17 Jul 23 19:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-114855                                  | embed-certs-114855           | jenkins | v1.30.1 | 17 Jul 23 19:57 UTC | 17 Jul 23 20:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
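	For readability, the last "start" entry in the table above (wrapped across several rows) corresponds to a single invocation along the lines of the one below; the out/minikube-linux-amd64 binary name is assumed from the other commands recorded in this report, and the flags are copied directly from the table cells:
	
	out/minikube-linux-amd64 start -p embed-certs-114855 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.27.3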
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 19:57:15
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 19:57:15.731358 1103141 out.go:296] Setting OutFile to fd 1 ...
	I0717 19:57:15.731568 1103141 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:57:15.731580 1103141 out.go:309] Setting ErrFile to fd 2...
	I0717 19:57:15.731587 1103141 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:57:15.731815 1103141 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1061725/.minikube/bin
	I0717 19:57:15.732432 1103141 out.go:303] Setting JSON to false
	I0717 19:57:15.733539 1103141 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":16787,"bootTime":1689607049,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 19:57:15.733642 1103141 start.go:138] virtualization: kvm guest
	I0717 19:57:15.737317 1103141 out.go:177] * [embed-certs-114855] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 19:57:15.739399 1103141 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 19:57:15.739429 1103141 notify.go:220] Checking for updates...
	I0717 19:57:15.741380 1103141 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:57:15.743518 1103141 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 19:57:15.745436 1103141 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1061725/.minikube
	I0717 19:57:15.747588 1103141 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 19:57:15.749399 1103141 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 19:57:15.751806 1103141 config.go:182] Loaded profile config "embed-certs-114855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:57:15.752284 1103141 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:57:15.752344 1103141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:57:15.767989 1103141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42921
	I0717 19:57:15.768411 1103141 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:57:15.769006 1103141 main.go:141] libmachine: Using API Version  1
	I0717 19:57:15.769098 1103141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:57:15.769495 1103141 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:57:15.769753 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 19:57:15.770054 1103141 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 19:57:15.770369 1103141 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:57:15.770414 1103141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:57:15.785632 1103141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40597
	I0717 19:57:15.786193 1103141 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:57:15.786746 1103141 main.go:141] libmachine: Using API Version  1
	I0717 19:57:15.786780 1103141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:57:15.787144 1103141 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:57:15.787366 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 19:57:15.827764 1103141 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 19:57:15.829847 1103141 start.go:298] selected driver: kvm2
	I0717 19:57:15.829881 1103141 start.go:880] validating driver "kvm2" against &{Name:embed-certs-114855 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-114855 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:57:15.830064 1103141 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 19:57:15.830818 1103141 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:57:15.830919 1103141 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16890-1061725/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 19:57:15.846540 1103141 install.go:137] /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2 version is 1.30.1
	I0717 19:57:15.846983 1103141 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 19:57:15.847033 1103141 cni.go:84] Creating CNI manager for ""
	I0717 19:57:15.847067 1103141 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:57:15.847081 1103141 start_flags.go:319] config:
	{Name:embed-certs-114855 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-114855 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:57:15.847306 1103141 iso.go:125] acquiring lock: {Name:mk4c08fdde891ec9d41b144ca36ec57b2da32175 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:57:15.849943 1103141 out.go:177] * Starting control plane node embed-certs-114855 in cluster embed-certs-114855
	I0717 19:57:14.309967 1101908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.177:22: connect: no route to host
	I0717 19:57:15.851794 1103141 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 19:57:15.851858 1103141 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0717 19:57:15.851874 1103141 cache.go:57] Caching tarball of preloaded images
	I0717 19:57:15.851987 1103141 preload.go:174] Found /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 19:57:15.852001 1103141 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 19:57:15.852143 1103141 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855/config.json ...
	I0717 19:57:15.852383 1103141 start.go:365] acquiring machines lock for embed-certs-114855: {Name:mk1fbc5d19c9a63796b02883911e39f1c406ff89 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:57:17.381986 1101908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.177:22: connect: no route to host
	I0717 19:57:23.461901 1101908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.177:22: connect: no route to host
	I0717 19:57:26.533953 1101908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.177:22: connect: no route to host
	I0717 19:57:32.613932 1101908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.177:22: connect: no route to host
	I0717 19:57:35.685977 1101908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.177:22: connect: no route to host
	I0717 19:57:41.765852 1101908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.177:22: connect: no route to host
	I0717 19:57:44.837869 1101908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.177:22: connect: no route to host
	I0717 19:57:50.917965 1101908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.177:22: connect: no route to host
	I0717 19:57:53.921775 1102136 start.go:369] acquired machines lock for "no-preload-408472" in 4m25.126407357s
	I0717 19:57:53.921838 1102136 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:57:53.921845 1102136 fix.go:54] fixHost starting: 
	I0717 19:57:53.922267 1102136 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:57:53.922309 1102136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:57:53.937619 1102136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39509
	I0717 19:57:53.938191 1102136 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:57:53.938815 1102136 main.go:141] libmachine: Using API Version  1
	I0717 19:57:53.938854 1102136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:57:53.939222 1102136 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:57:53.939501 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	I0717 19:57:53.939704 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetState
	I0717 19:57:53.941674 1102136 fix.go:102] recreateIfNeeded on no-preload-408472: state=Stopped err=<nil>
	I0717 19:57:53.941732 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	W0717 19:57:53.941961 1102136 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 19:57:53.944840 1102136 out.go:177] * Restarting existing kvm2 VM for "no-preload-408472" ...
	I0717 19:57:53.919175 1101908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:57:53.919232 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 19:57:53.921597 1101908 machine.go:91] provisioned docker machine in 4m37.562634254s
	I0717 19:57:53.921653 1101908 fix.go:56] fixHost completed within 4m37.5908464s
	I0717 19:57:53.921659 1101908 start.go:83] releasing machines lock for "old-k8s-version-149000", held for 4m37.590895645s
	W0717 19:57:53.921680 1101908 start.go:688] error starting host: provision: host is not running
	W0717 19:57:53.921815 1101908 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0717 19:57:53.921826 1101908 start.go:703] Will try again in 5 seconds ...
	I0717 19:57:53.947202 1102136 main.go:141] libmachine: (no-preload-408472) Calling .Start
	I0717 19:57:53.947561 1102136 main.go:141] libmachine: (no-preload-408472) Ensuring networks are active...
	I0717 19:57:53.948787 1102136 main.go:141] libmachine: (no-preload-408472) Ensuring network default is active
	I0717 19:57:53.949254 1102136 main.go:141] libmachine: (no-preload-408472) Ensuring network mk-no-preload-408472 is active
	I0717 19:57:53.949695 1102136 main.go:141] libmachine: (no-preload-408472) Getting domain xml...
	I0717 19:57:53.950763 1102136 main.go:141] libmachine: (no-preload-408472) Creating domain...
	I0717 19:57:55.256278 1102136 main.go:141] libmachine: (no-preload-408472) Waiting to get IP...
	I0717 19:57:55.257164 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:57:55.257506 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:57:55.257619 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:57:55.257495 1103281 retry.go:31] will retry after 210.861865ms: waiting for machine to come up
	I0717 19:57:55.470210 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:57:55.470771 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:57:55.470798 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:57:55.470699 1103281 retry.go:31] will retry after 348.064579ms: waiting for machine to come up
	I0717 19:57:55.820645 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:57:55.821335 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:57:55.821366 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:57:55.821251 1103281 retry.go:31] will retry after 340.460253ms: waiting for machine to come up
	I0717 19:57:56.163913 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:57:56.164380 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:57:56.164412 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:57:56.164331 1103281 retry.go:31] will retry after 551.813243ms: waiting for machine to come up
	I0717 19:57:56.718505 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:57:56.719004 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:57:56.719034 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:57:56.718953 1103281 retry.go:31] will retry after 640.277548ms: waiting for machine to come up
	I0717 19:57:57.360930 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:57:57.361456 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:57:57.361485 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:57:57.361395 1103281 retry.go:31] will retry after 590.296988ms: waiting for machine to come up
	I0717 19:57:57.953399 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:57:57.953886 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:57:57.953913 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:57:57.953811 1103281 retry.go:31] will retry after 884.386688ms: waiting for machine to come up
	I0717 19:57:58.923546 1101908 start.go:365] acquiring machines lock for old-k8s-version-149000: {Name:mk1fbc5d19c9a63796b02883911e39f1c406ff89 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:57:58.840158 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:57:58.840577 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:57:58.840610 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:57:58.840529 1103281 retry.go:31] will retry after 1.10470212s: waiting for machine to come up
	I0717 19:57:59.947457 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:57:59.947973 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:57:59.948001 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:57:59.947933 1103281 retry.go:31] will retry after 1.338375271s: waiting for machine to come up
	I0717 19:58:01.288616 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:01.289194 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:58:01.289226 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:58:01.289133 1103281 retry.go:31] will retry after 1.633127486s: waiting for machine to come up
	I0717 19:58:02.923621 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:02.924330 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:58:02.924365 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:58:02.924253 1103281 retry.go:31] will retry after 2.365924601s: waiting for machine to come up
	I0717 19:58:05.291979 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:05.292487 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:58:05.292519 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:58:05.292430 1103281 retry.go:31] will retry after 2.846623941s: waiting for machine to come up
	I0717 19:58:08.142536 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:08.143021 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:58:08.143050 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:58:08.142961 1103281 retry.go:31] will retry after 3.495052949s: waiting for machine to come up
	I0717 19:58:11.641858 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:11.642358 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:58:11.642384 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:58:11.642302 1103281 retry.go:31] will retry after 5.256158303s: waiting for machine to come up
	I0717 19:58:18.263277 1102415 start.go:369] acquired machines lock for "default-k8s-diff-port-711413" in 4m14.158154198s
	I0717 19:58:18.263342 1102415 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:58:18.263362 1102415 fix.go:54] fixHost starting: 
	I0717 19:58:18.263897 1102415 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:58:18.263950 1102415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:58:18.280719 1102415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33059
	I0717 19:58:18.281241 1102415 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:58:18.281819 1102415 main.go:141] libmachine: Using API Version  1
	I0717 19:58:18.281845 1102415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:58:18.282261 1102415 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:58:18.282489 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	I0717 19:58:18.282657 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetState
	I0717 19:58:18.284625 1102415 fix.go:102] recreateIfNeeded on default-k8s-diff-port-711413: state=Stopped err=<nil>
	I0717 19:58:18.284655 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	W0717 19:58:18.284839 1102415 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 19:58:18.288135 1102415 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-711413" ...
	I0717 19:58:16.902597 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:16.903197 1102136 main.go:141] libmachine: (no-preload-408472) Found IP for machine: 192.168.61.65
	I0717 19:58:16.903226 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has current primary IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:16.903232 1102136 main.go:141] libmachine: (no-preload-408472) Reserving static IP address...
	I0717 19:58:16.903758 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "no-preload-408472", mac: "52:54:00:36:75:ac", ip: "192.168.61.65"} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:16.903794 1102136 main.go:141] libmachine: (no-preload-408472) Reserved static IP address: 192.168.61.65
	I0717 19:58:16.903806 1102136 main.go:141] libmachine: (no-preload-408472) DBG | skip adding static IP to network mk-no-preload-408472 - found existing host DHCP lease matching {name: "no-preload-408472", mac: "52:54:00:36:75:ac", ip: "192.168.61.65"}
	I0717 19:58:16.903820 1102136 main.go:141] libmachine: (no-preload-408472) DBG | Getting to WaitForSSH function...
	I0717 19:58:16.903830 1102136 main.go:141] libmachine: (no-preload-408472) Waiting for SSH to be available...
	I0717 19:58:16.906385 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:16.906796 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:16.906833 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:16.906966 1102136 main.go:141] libmachine: (no-preload-408472) DBG | Using SSH client type: external
	I0717 19:58:16.907000 1102136 main.go:141] libmachine: (no-preload-408472) DBG | Using SSH private key: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/no-preload-408472/id_rsa (-rw-------)
	I0717 19:58:16.907034 1102136 main.go:141] libmachine: (no-preload-408472) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/no-preload-408472/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:58:16.907056 1102136 main.go:141] libmachine: (no-preload-408472) DBG | About to run SSH command:
	I0717 19:58:16.907116 1102136 main.go:141] libmachine: (no-preload-408472) DBG | exit 0
	I0717 19:58:16.998306 1102136 main.go:141] libmachine: (no-preload-408472) DBG | SSH cmd err, output: <nil>: 
	I0717 19:58:16.998744 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetConfigRaw
	I0717 19:58:16.999490 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetIP
	I0717 19:58:17.002697 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.003108 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:17.003156 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.003405 1102136 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/config.json ...
	I0717 19:58:17.003642 1102136 machine.go:88] provisioning docker machine ...
	I0717 19:58:17.003668 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	I0717 19:58:17.003989 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetMachineName
	I0717 19:58:17.004208 1102136 buildroot.go:166] provisioning hostname "no-preload-408472"
	I0717 19:58:17.004234 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetMachineName
	I0717 19:58:17.004464 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:58:17.007043 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.007337 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:17.007371 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.007517 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:58:17.007730 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:17.007933 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:17.008071 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:58:17.008245 1102136 main.go:141] libmachine: Using SSH client type: native
	I0717 19:58:17.008906 1102136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.65 22 <nil> <nil>}
	I0717 19:58:17.008927 1102136 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-408472 && echo "no-preload-408472" | sudo tee /etc/hostname
	I0717 19:58:17.143779 1102136 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-408472
	
	I0717 19:58:17.143816 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:58:17.146881 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.147332 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:17.147384 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.147556 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:58:17.147807 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:17.147990 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:17.148137 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:58:17.148320 1102136 main.go:141] libmachine: Using SSH client type: native
	I0717 19:58:17.148789 1102136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.65 22 <nil> <nil>}
	I0717 19:58:17.148811 1102136 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-408472' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-408472/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-408472' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:58:17.279254 1102136 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:58:17.279292 1102136 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16890-1061725/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-1061725/.minikube}
	I0717 19:58:17.279339 1102136 buildroot.go:174] setting up certificates
	I0717 19:58:17.279375 1102136 provision.go:83] configureAuth start
	I0717 19:58:17.279390 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetMachineName
	I0717 19:58:17.279745 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetIP
	I0717 19:58:17.283125 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.283563 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:17.283610 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.283837 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:58:17.286508 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.286931 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:17.286975 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.287088 1102136 provision.go:138] copyHostCerts
	I0717 19:58:17.287196 1102136 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem, removing ...
	I0717 19:58:17.287210 1102136 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem
	I0717 19:58:17.287299 1102136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem (1082 bytes)
	I0717 19:58:17.287430 1102136 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem, removing ...
	I0717 19:58:17.287443 1102136 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem
	I0717 19:58:17.287486 1102136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem (1123 bytes)
	I0717 19:58:17.287634 1102136 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem, removing ...
	I0717 19:58:17.287650 1102136 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem
	I0717 19:58:17.287691 1102136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem (1675 bytes)
	I0717 19:58:17.287762 1102136 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem org=jenkins.no-preload-408472 san=[192.168.61.65 192.168.61.65 localhost 127.0.0.1 minikube no-preload-408472]
	I0717 19:58:17.492065 1102136 provision.go:172] copyRemoteCerts
	I0717 19:58:17.492172 1102136 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:58:17.492209 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:58:17.495444 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.495931 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:17.495971 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.496153 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:58:17.496406 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:17.496605 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:58:17.496793 1102136 sshutil.go:53] new ssh client: &{IP:192.168.61.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/no-preload-408472/id_rsa Username:docker}
	I0717 19:58:17.588540 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 19:58:17.613378 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0717 19:58:17.638066 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:58:17.662222 1102136 provision.go:86] duration metric: configureAuth took 382.813668ms
	I0717 19:58:17.662267 1102136 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:58:17.662522 1102136 config.go:182] Loaded profile config "no-preload-408472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:58:17.662613 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:58:17.665914 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.666415 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:17.666475 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.666673 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:58:17.666934 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:17.667122 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:17.667287 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:58:17.667466 1102136 main.go:141] libmachine: Using SSH client type: native
	I0717 19:58:17.667885 1102136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.65 22 <nil> <nil>}
	I0717 19:58:17.667903 1102136 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:58:17.997416 1102136 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:58:17.997461 1102136 machine.go:91] provisioned docker machine in 993.802909ms
	I0717 19:58:17.997476 1102136 start.go:300] post-start starting for "no-preload-408472" (driver="kvm2")
	I0717 19:58:17.997490 1102136 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:58:17.997533 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	I0717 19:58:17.997925 1102136 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:58:17.998013 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:58:18.000755 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:18.001185 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:18.001210 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:18.001409 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:58:18.001682 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:18.001892 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:58:18.002059 1102136 sshutil.go:53] new ssh client: &{IP:192.168.61.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/no-preload-408472/id_rsa Username:docker}
	I0717 19:58:18.093738 1102136 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:58:18.098709 1102136 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 19:58:18.098744 1102136 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/addons for local assets ...
	I0717 19:58:18.098854 1102136 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/files for local assets ...
	I0717 19:58:18.098974 1102136 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> 10689542.pem in /etc/ssl/certs
	I0717 19:58:18.099098 1102136 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:58:18.110195 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:58:18.135572 1102136 start.go:303] post-start completed in 138.074603ms
	I0717 19:58:18.135628 1102136 fix.go:56] fixHost completed within 24.21376423s
	I0717 19:58:18.135652 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:58:18.139033 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:18.139617 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:18.139656 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:18.139847 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:58:18.140146 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:18.140366 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:18.140612 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:58:18.140819 1102136 main.go:141] libmachine: Using SSH client type: native
	I0717 19:58:18.141265 1102136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.65 22 <nil> <nil>}
	I0717 19:58:18.141282 1102136 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:58:18.263053 1102136 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689623898.247474645
	
	I0717 19:58:18.263080 1102136 fix.go:206] guest clock: 1689623898.247474645
	I0717 19:58:18.263096 1102136 fix.go:219] Guest: 2023-07-17 19:58:18.247474645 +0000 UTC Remote: 2023-07-17 19:58:18.135632998 +0000 UTC m=+289.513196741 (delta=111.841647ms)
	I0717 19:58:18.263124 1102136 fix.go:190] guest clock delta is within tolerance: 111.841647ms
	I0717 19:58:18.263132 1102136 start.go:83] releasing machines lock for "no-preload-408472", held for 24.341313825s
	I0717 19:58:18.263184 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	I0717 19:58:18.263451 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetIP
	I0717 19:58:18.266352 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:18.266707 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:18.266732 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:18.266920 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	I0717 19:58:18.267684 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	I0717 19:58:18.267935 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	I0717 19:58:18.268033 1102136 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:58:18.268095 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:58:18.268205 1102136 ssh_runner.go:195] Run: cat /version.json
	I0717 19:58:18.268249 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:58:18.270983 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:18.271223 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:18.271324 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:18.271385 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:18.271494 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:58:18.271608 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:18.271628 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:18.271697 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:18.271879 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:58:18.271895 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:58:18.272094 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:18.272099 1102136 sshutil.go:53] new ssh client: &{IP:192.168.61.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/no-preload-408472/id_rsa Username:docker}
	I0717 19:58:18.272253 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:58:18.272419 1102136 sshutil.go:53] new ssh client: &{IP:192.168.61.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/no-preload-408472/id_rsa Username:docker}
	W0717 19:58:18.395775 1102136 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 19:58:18.395916 1102136 ssh_runner.go:195] Run: systemctl --version
	I0717 19:58:18.403799 1102136 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:58:18.557449 1102136 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:58:18.564470 1102136 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:58:18.564580 1102136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:58:18.580344 1102136 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
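For reference, the find/mv step above sidelines any bridge or podman CNI configs by appending a ".mk_disabled" suffix rather than deleting them. A minimal read-back sketch (standard coreutils, not taken from the log):

  ls /etc/cni/net.d/
  # expected after the step above: 87-podman-bridge.conflist.mk_disabled (the file the log reports as disabled)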
	I0717 19:58:18.580386 1102136 start.go:469] detecting cgroup driver to use...
	I0717 19:58:18.580482 1102136 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:58:18.595052 1102136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:58:18.608844 1102136 docker.go:196] disabling cri-docker service (if available) ...
	I0717 19:58:18.608948 1102136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:58:18.621908 1102136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:58:18.635796 1102136 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:58:18.290375 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .Start
	I0717 19:58:18.290615 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Ensuring networks are active...
	I0717 19:58:18.291470 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Ensuring network default is active
	I0717 19:58:18.292041 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Ensuring network mk-default-k8s-diff-port-711413 is active
	I0717 19:58:18.292477 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Getting domain xml...
	I0717 19:58:18.293393 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Creating domain...
	I0717 19:58:18.751368 1102136 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:58:18.878097 1102136 docker.go:212] disabling docker service ...
	I0717 19:58:18.878186 1102136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:58:18.895094 1102136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:58:18.909958 1102136 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:58:19.032014 1102136 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:58:19.141917 1102136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:58:19.158474 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:58:19.178688 1102136 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:58:19.178767 1102136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:58:19.189949 1102136 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:58:19.190059 1102136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:58:19.201270 1102136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:58:19.212458 1102136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
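For reference, the three sed edits above pin the pause image, the cgroup manager, and the conmon cgroup in CRI-O's drop-in config. A read-back sketch reconstructed from those commands (no other settings in the file are implied):

  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
  # expected:
  #   pause_image = "registry.k8s.io/pause:3.9"
  #   cgroup_manager = "cgroupfs"
  #   conmon_cgroup = "pod"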
	I0717 19:58:19.226193 1102136 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:58:19.239919 1102136 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:58:19.251627 1102136 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:58:19.251711 1102136 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:58:19.268984 1102136 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
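For reference, the fallback above (the sysctl fails because br_netfilter is not loaded, so the module is loaded and IPv4 forwarding is enabled directly) can be verified against the same paths the log touches (a sketch, not from the log):

  lsmod | grep br_netfilter                          # loaded by the modprobe above
  cat /proc/sys/net/bridge/bridge-nf-call-iptables   # present once br_netfilter is loaded
  cat /proc/sys/net/ipv4/ip_forward                  # should read 1 after the echo above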
	I0717 19:58:19.281898 1102136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:58:19.390523 1102136 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:58:19.599827 1102136 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:58:19.599937 1102136 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:58:19.605741 1102136 start.go:537] Will wait 60s for crictl version
	I0717 19:58:19.605810 1102136 ssh_runner.go:195] Run: which crictl
	I0717 19:58:19.610347 1102136 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:58:19.653305 1102136 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 19:58:19.653418 1102136 ssh_runner.go:195] Run: crio --version
	I0717 19:58:19.712418 1102136 ssh_runner.go:195] Run: crio --version
	I0717 19:58:19.773012 1102136 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0717 19:58:19.775099 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetIP
	I0717 19:58:19.778530 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:19.779127 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:19.779167 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:19.779477 1102136 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0717 19:58:19.784321 1102136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
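For reference, the guard-and-rewrite above drops any stale host.minikube.internal entry and pins it to the gateway address. A read-back sketch:

  grep 'host.minikube.internal' /etc/hosts
  # expected: 192.168.61.1	host.minikube.internal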
	I0717 19:58:19.797554 1102136 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 19:58:19.797682 1102136 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:58:19.833548 1102136 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0717 19:58:19.833590 1102136 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.27.3 registry.k8s.io/kube-controller-manager:v1.27.3 registry.k8s.io/kube-scheduler:v1.27.3 registry.k8s.io/kube-proxy:v1.27.3 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.7-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 19:58:19.833722 1102136 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:58:19.833749 1102136 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0717 19:58:19.833760 1102136 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.7-0
	I0717 19:58:19.833787 1102136 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0717 19:58:19.833821 1102136 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.27.3
	I0717 19:58:19.833722 1102136 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.27.3
	I0717 19:58:19.833722 1102136 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 19:58:19.833722 1102136 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.27.3
	I0717 19:58:19.835432 1102136 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 19:58:19.835461 1102136 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.7-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.7-0
	I0717 19:58:19.835497 1102136 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:58:19.835492 1102136 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0717 19:58:19.835432 1102136 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0717 19:58:19.835432 1102136 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.27.3
	I0717 19:58:19.835463 1102136 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.27.3
	I0717 19:58:19.835436 1102136 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.27.3
	I0717 19:58:20.032458 1102136 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.27.3
	I0717 19:58:20.032526 1102136 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I0717 19:58:20.035507 1102136 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.7-0
	I0717 19:58:20.035509 1102136 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.27.3
	I0717 19:58:20.041878 1102136 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 19:58:20.056915 1102136 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0717 19:58:20.099112 1102136 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.27.3
	I0717 19:58:20.119661 1102136 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:58:20.195250 1102136 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.27.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.27.3" does not exist at hash "08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a" in container runtime
	I0717 19:58:20.195338 1102136 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0717 19:58:20.195384 1102136 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0717 19:58:20.195441 1102136 ssh_runner.go:195] Run: which crictl
	I0717 19:58:20.195348 1102136 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.27.3
	I0717 19:58:20.195521 1102136 ssh_runner.go:195] Run: which crictl
	I0717 19:58:20.212109 1102136 cache_images.go:116] "registry.k8s.io/etcd:3.5.7-0" needs transfer: "registry.k8s.io/etcd:3.5.7-0" does not exist at hash "86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681" in container runtime
	I0717 19:58:20.212185 1102136 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.7-0
	I0717 19:58:20.212255 1102136 ssh_runner.go:195] Run: which crictl
	I0717 19:58:20.232021 1102136 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.27.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.27.3" does not exist at hash "41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a" in container runtime
	I0717 19:58:20.232077 1102136 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.27.3
	I0717 19:58:20.232126 1102136 ssh_runner.go:195] Run: which crictl
	I0717 19:58:20.232224 1102136 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.27.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.27.3" does not exist at hash "7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f" in container runtime
	I0717 19:58:20.232257 1102136 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 19:58:20.232287 1102136 ssh_runner.go:195] Run: which crictl
	I0717 19:58:20.363363 1102136 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.27.3" needs transfer: "registry.k8s.io/kube-proxy:v1.27.3" does not exist at hash "5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c" in container runtime
	I0717 19:58:20.363425 1102136 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.27.3
	I0717 19:58:20.363470 1102136 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0717 19:58:20.363498 1102136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0717 19:58:20.363529 1102136 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:58:20.363483 1102136 ssh_runner.go:195] Run: which crictl
	I0717 19:58:20.363579 1102136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.27.3
	I0717 19:58:20.363660 1102136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.7-0
	I0717 19:58:20.363569 1102136 ssh_runner.go:195] Run: which crictl
	I0717 19:58:20.363722 1102136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.27.3
	I0717 19:58:20.363783 1102136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 19:58:20.368457 1102136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.27.3
	I0717 19:58:20.469461 1102136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.27.3
	I0717 19:58:20.469647 1102136 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.27.3
	I0717 19:58:20.476546 1102136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.7-0
	I0717 19:58:20.476613 1102136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:58:20.476657 1102136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.27.3
	I0717 19:58:20.476703 1102136 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.7-0
	I0717 19:58:20.476751 1102136 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.27.3
	I0717 19:58:20.476824 1102136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I0717 19:58:20.476918 1102136 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I0717 19:58:20.483915 1102136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.27.3
	I0717 19:58:20.483949 1102136 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.27.3 (exists)
	I0717 19:58:20.483966 1102136 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.27.3
	I0717 19:58:20.483970 1102136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.27.3
	I0717 19:58:20.484015 1102136 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.27.3
	I0717 19:58:20.484030 1102136 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.27.3
	I0717 19:58:20.484067 1102136 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.27.3
	I0717 19:58:20.532090 1102136 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I0717 19:58:20.532113 1102136 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.7-0 (exists)
	I0717 19:58:20.532194 1102136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 19:58:20.532213 1102136 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.27.3 (exists)
	I0717 19:58:20.532304 1102136 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0717 19:58:19.668342 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting to get IP...
	I0717 19:58:19.669327 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:19.669868 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:19.669996 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:19.669860 1103407 retry.go:31] will retry after 270.908859ms: waiting for machine to come up
	I0717 19:58:19.942914 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:19.943490 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:19.943524 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:19.943434 1103407 retry.go:31] will retry after 387.572792ms: waiting for machine to come up
	I0717 19:58:20.333302 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:20.333904 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:20.333934 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:20.333842 1103407 retry.go:31] will retry after 325.807844ms: waiting for machine to come up
	I0717 19:58:20.661438 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:20.661890 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:20.661926 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:20.661828 1103407 retry.go:31] will retry after 492.482292ms: waiting for machine to come up
	I0717 19:58:21.155613 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:21.156184 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:21.156212 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:21.156089 1103407 retry.go:31] will retry after 756.388438ms: waiting for machine to come up
	I0717 19:58:21.914212 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:21.914770 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:21.914806 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:21.914695 1103407 retry.go:31] will retry after 754.504649ms: waiting for machine to come up
	I0717 19:58:22.670913 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:22.671334 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:22.671369 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:22.671278 1103407 retry.go:31] will retry after 790.272578ms: waiting for machine to come up
	I0717 19:58:23.463657 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:23.464118 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:23.464145 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:23.464042 1103407 retry.go:31] will retry after 1.267289365s: waiting for machine to come up
	I0717 19:58:23.707718 1102136 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.27.3: (3.223672376s)
	I0717 19:58:23.707750 1102136 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.27.3 from cache
	I0717 19:58:23.707788 1102136 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I0717 19:58:23.707804 1102136 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.27.3: (3.223748615s)
	I0717 19:58:23.707842 1102136 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.27.3 (exists)
	I0717 19:58:23.707856 1102136 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.27.3: (3.223769648s)
	I0717 19:58:23.707862 1102136 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I0717 19:58:23.707878 1102136 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.27.3 (exists)
	I0717 19:58:23.707908 1102136 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (3.175586566s)
	I0717 19:58:23.707926 1102136 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0717 19:58:24.960652 1102136 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.252755334s)
	I0717 19:58:24.960691 1102136 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I0717 19:58:24.960722 1102136 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.7-0
	I0717 19:58:24.960770 1102136 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.7-0
	I0717 19:58:24.733590 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:24.734140 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:24.734176 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:24.734049 1103407 retry.go:31] will retry after 1.733875279s: waiting for machine to come up
	I0717 19:58:26.470148 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:26.470587 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:26.470640 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:26.470522 1103407 retry.go:31] will retry after 1.829632979s: waiting for machine to come up
	I0717 19:58:28.301973 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:28.302506 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:28.302560 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:28.302421 1103407 retry.go:31] will retry after 2.201530837s: waiting for machine to come up
	I0717 19:58:32.118558 1102136 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.7-0: (7.157750323s)
	I0717 19:58:32.118606 1102136 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.7-0 from cache
	I0717 19:58:32.118641 1102136 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.27.3
	I0717 19:58:32.118700 1102136 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.27.3
	I0717 19:58:33.577369 1102136 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.27.3: (1.458638516s)
	I0717 19:58:33.577400 1102136 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.27.3 from cache
	I0717 19:58:33.577447 1102136 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.27.3
	I0717 19:58:33.577595 1102136 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.27.3
	I0717 19:58:30.507029 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:30.507586 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:30.507647 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:30.507447 1103407 retry.go:31] will retry after 2.947068676s: waiting for machine to come up
	I0717 19:58:33.456714 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:33.457232 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:33.457261 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:33.457148 1103407 retry.go:31] will retry after 3.074973516s: waiting for machine to come up
	I0717 19:58:37.871095 1103141 start.go:369] acquired machines lock for "embed-certs-114855" in 1m22.018672602s
	I0717 19:58:37.871161 1103141 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:58:37.871175 1103141 fix.go:54] fixHost starting: 
	I0717 19:58:37.871619 1103141 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:58:37.871689 1103141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:58:37.889865 1103141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46381
	I0717 19:58:37.890334 1103141 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:58:37.891044 1103141 main.go:141] libmachine: Using API Version  1
	I0717 19:58:37.891070 1103141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:58:37.891471 1103141 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:58:37.891734 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 19:58:37.891927 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetState
	I0717 19:58:37.893736 1103141 fix.go:102] recreateIfNeeded on embed-certs-114855: state=Stopped err=<nil>
	I0717 19:58:37.893779 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	W0717 19:58:37.893994 1103141 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 19:58:37.896545 1103141 out.go:177] * Restarting existing kvm2 VM for "embed-certs-114855" ...
	I0717 19:58:35.345141 1102136 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.27.3: (1.767506173s)
	I0717 19:58:35.345180 1102136 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.27.3 from cache
	I0717 19:58:35.345211 1102136 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.27.3
	I0717 19:58:35.345273 1102136 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.27.3
	I0717 19:58:37.803066 1102136 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.27.3: (2.457743173s)
	I0717 19:58:37.803106 1102136 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.27.3 from cache
	I0717 19:58:37.803139 1102136 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 19:58:37.803193 1102136 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0717 19:58:38.559165 1102136 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 19:58:38.559222 1102136 cache_images.go:123] Successfully loaded all cached images
	I0717 19:58:38.559231 1102136 cache_images.go:92] LoadImages completed in 18.725611601s
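For reference, once LoadImages completes the runtime should report the eight images listed at the start of the load. A read-back sketch using the same CLI the log queried earlier:

  sudo crictl images | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler|kube-proxy|coredns|etcd|pause|storage-provisioner'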
	I0717 19:58:38.559363 1102136 ssh_runner.go:195] Run: crio config
	I0717 19:58:38.630364 1102136 cni.go:84] Creating CNI manager for ""
	I0717 19:58:38.630394 1102136 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:58:38.630421 1102136 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 19:58:38.630447 1102136 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.65 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-408472 NodeName:no-preload-408472 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:58:38.630640 1102136 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-408472"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:58:38.630739 1102136 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-408472 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:no-preload-408472 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
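For orientation, the generated config above is a standard four-document kubeadm file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) followed by a kubelet unit drop-in. How minikube hands the config to kubeadm is not shown in this excerpt; applied by hand, a file of that shape would go through something like the sketch below (the file name is illustrative, not from the log):

  sudo kubeadm init --config kubeadm.yaml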
	I0717 19:58:38.630813 1102136 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 19:58:38.643343 1102136 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:58:38.643443 1102136 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:58:38.653495 1102136 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
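For reference, the 376-byte payload scp'd above is the kubelet drop-in printed earlier, landing at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A read-back sketch:

  sudo systemctl cat kubelet
  # should show the drop-in with the ExecStart line quoted in the log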
	I0717 19:58:36.535628 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.536224 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Found IP for machine: 192.168.72.51
	I0717 19:58:36.536256 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Reserving static IP address...
	I0717 19:58:36.536278 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has current primary IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.536720 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-711413", mac: "52:54:00:7d:d7:a9", ip: "192.168.72.51"} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:36.536756 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | skip adding static IP to network mk-default-k8s-diff-port-711413 - found existing host DHCP lease matching {name: "default-k8s-diff-port-711413", mac: "52:54:00:7d:d7:a9", ip: "192.168.72.51"}
	I0717 19:58:36.536773 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Reserved static IP address: 192.168.72.51
	I0717 19:58:36.536791 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for SSH to be available...
	I0717 19:58:36.536804 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | Getting to WaitForSSH function...
	I0717 19:58:36.540038 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.540593 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:36.540649 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.540764 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | Using SSH client type: external
	I0717 19:58:36.540799 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | Using SSH private key: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/default-k8s-diff-port-711413/id_rsa (-rw-------)
	I0717 19:58:36.540855 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/default-k8s-diff-port-711413/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:58:36.540876 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | About to run SSH command:
	I0717 19:58:36.540895 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | exit 0
	I0717 19:58:36.637774 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | SSH cmd err, output: <nil>: 
	I0717 19:58:36.638200 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetConfigRaw
	I0717 19:58:36.638931 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetIP
	I0717 19:58:36.642048 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.642530 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:36.642560 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.642850 1102415 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/config.json ...
	I0717 19:58:36.643061 1102415 machine.go:88] provisioning docker machine ...
	I0717 19:58:36.643080 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	I0717 19:58:36.643344 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetMachineName
	I0717 19:58:36.643516 1102415 buildroot.go:166] provisioning hostname "default-k8s-diff-port-711413"
	I0717 19:58:36.643535 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetMachineName
	I0717 19:58:36.643766 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:58:36.646810 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.647205 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:36.647243 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.647582 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:58:36.647826 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:36.648082 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:36.648275 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:58:36.648470 1102415 main.go:141] libmachine: Using SSH client type: native
	I0717 19:58:36.648883 1102415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.51 22 <nil> <nil>}
	I0717 19:58:36.648898 1102415 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-711413 && echo "default-k8s-diff-port-711413" | sudo tee /etc/hostname
	I0717 19:58:36.784478 1102415 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-711413
	
	I0717 19:58:36.784524 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:58:36.787641 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.788065 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:36.788118 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.788363 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:58:36.788605 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:36.788799 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:36.788942 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:58:36.789239 1102415 main.go:141] libmachine: Using SSH client type: native
	I0717 19:58:36.789869 1102415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.51 22 <nil> <nil>}
	I0717 19:58:36.789916 1102415 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-711413' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-711413/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-711413' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:58:36.923177 1102415 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:58:36.923211 1102415 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16890-1061725/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-1061725/.minikube}
	I0717 19:58:36.923237 1102415 buildroot.go:174] setting up certificates
	I0717 19:58:36.923248 1102415 provision.go:83] configureAuth start
	I0717 19:58:36.923257 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetMachineName
	I0717 19:58:36.923633 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetIP
	I0717 19:58:36.927049 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.927406 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:36.927443 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.927641 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:58:36.930158 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.930705 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:36.930771 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.930844 1102415 provision.go:138] copyHostCerts
	I0717 19:58:36.930969 1102415 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem, removing ...
	I0717 19:58:36.930984 1102415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem
	I0717 19:58:36.931064 1102415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem (1123 bytes)
	I0717 19:58:36.931188 1102415 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem, removing ...
	I0717 19:58:36.931201 1102415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem
	I0717 19:58:36.931235 1102415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem (1675 bytes)
	I0717 19:58:36.931315 1102415 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem, removing ...
	I0717 19:58:36.931325 1102415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem
	I0717 19:58:36.931353 1102415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem (1082 bytes)
	I0717 19:58:36.931423 1102415 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-711413 san=[192.168.72.51 192.168.72.51 localhost 127.0.0.1 minikube default-k8s-diff-port-711413]
	I0717 19:58:37.043340 1102415 provision.go:172] copyRemoteCerts
	I0717 19:58:37.043444 1102415 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:58:37.043487 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:58:37.047280 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.047842 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:37.047879 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.048143 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:58:37.048410 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:37.048648 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:58:37.048844 1102415 sshutil.go:53] new ssh client: &{IP:192.168.72.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/default-k8s-diff-port-711413/id_rsa Username:docker}
	I0717 19:58:37.147255 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 19:58:37.175437 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0717 19:58:37.202827 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
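For reference, the server certificate copied above was generated with the SANs listed in the provisioning line (192.168.72.51, localhost, 127.0.0.1, minikube, default-k8s-diff-port-711413). A quick way to confirm them on the copied file (a sketch using stock openssl):

  sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'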
	I0717 19:58:37.231780 1102415 provision.go:86] duration metric: configureAuth took 308.515103ms
	I0717 19:58:37.231818 1102415 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:58:37.232118 1102415 config.go:182] Loaded profile config "default-k8s-diff-port-711413": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:58:37.232255 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:58:37.235364 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.235964 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:37.236011 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.236296 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:58:37.236533 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:37.236793 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:37.236976 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:58:37.237175 1102415 main.go:141] libmachine: Using SSH client type: native
	I0717 19:58:37.237831 1102415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.51 22 <nil> <nil>}
	I0717 19:58:37.237866 1102415 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:58:37.601591 1102415 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:58:37.601631 1102415 machine.go:91] provisioned docker machine in 958.556319ms
	I0717 19:58:37.601644 1102415 start.go:300] post-start starting for "default-k8s-diff-port-711413" (driver="kvm2")
	I0717 19:58:37.601665 1102415 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:58:37.601692 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	I0717 19:58:37.602018 1102415 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:58:37.602048 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:58:37.604964 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.605335 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:37.605387 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.605486 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:58:37.605822 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:37.606033 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:58:37.606224 1102415 sshutil.go:53] new ssh client: &{IP:192.168.72.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/default-k8s-diff-port-711413/id_rsa Username:docker}
	I0717 19:58:37.696316 1102415 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:58:37.701409 1102415 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 19:58:37.701442 1102415 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/addons for local assets ...
	I0717 19:58:37.701579 1102415 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/files for local assets ...
	I0717 19:58:37.701694 1102415 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> 10689542.pem in /etc/ssl/certs
	I0717 19:58:37.701827 1102415 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:58:37.711545 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:58:37.739525 1102415 start.go:303] post-start completed in 137.838589ms
	I0717 19:58:37.739566 1102415 fix.go:56] fixHost completed within 19.476203721s
	I0717 19:58:37.739599 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:58:37.742744 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.743095 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:37.743127 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.743298 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:58:37.743568 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:37.743768 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:37.743929 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:58:37.744164 1102415 main.go:141] libmachine: Using SSH client type: native
	I0717 19:58:37.744786 1102415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.51 22 <nil> <nil>}
	I0717 19:58:37.744809 1102415 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:58:37.870894 1102415 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689623917.842259641
	
	I0717 19:58:37.870923 1102415 fix.go:206] guest clock: 1689623917.842259641
	I0717 19:58:37.870931 1102415 fix.go:219] Guest: 2023-07-17 19:58:37.842259641 +0000 UTC Remote: 2023-07-17 19:58:37.739572977 +0000 UTC m=+273.789942316 (delta=102.686664ms)
	I0717 19:58:37.870992 1102415 fix.go:190] guest clock delta is within tolerance: 102.686664ms
	I0717 19:58:37.871004 1102415 start.go:83] releasing machines lock for "default-k8s-diff-port-711413", held for 19.607687828s
	I0717 19:58:37.871044 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	I0717 19:58:37.871350 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetIP
	I0717 19:58:37.874527 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.874967 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:37.875029 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.875202 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	I0717 19:58:37.875791 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	I0717 19:58:37.876007 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	I0717 19:58:37.876141 1102415 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:58:37.876211 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:58:37.876261 1102415 ssh_runner.go:195] Run: cat /version.json
	I0717 19:58:37.876289 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:58:37.879243 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.879483 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.879717 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:37.879752 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.879861 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:58:37.880090 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:37.880098 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:37.880118 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.880204 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:58:37.880335 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:58:37.880427 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:37.880513 1102415 sshutil.go:53] new ssh client: &{IP:192.168.72.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/default-k8s-diff-port-711413/id_rsa Username:docker}
	I0717 19:58:37.880582 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:58:37.880714 1102415 sshutil.go:53] new ssh client: &{IP:192.168.72.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/default-k8s-diff-port-711413/id_rsa Username:docker}
	W0717 19:58:37.967909 1102415 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 19:58:37.968017 1102415 ssh_runner.go:195] Run: systemctl --version
	I0717 19:58:37.997996 1102415 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:58:38.148654 1102415 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:58:38.156049 1102415 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:58:38.156151 1102415 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:58:38.177835 1102415 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:58:38.177866 1102415 start.go:469] detecting cgroup driver to use...
	I0717 19:58:38.177945 1102415 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:58:38.196359 1102415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:58:38.209697 1102415 docker.go:196] disabling cri-docker service (if available) ...
	I0717 19:58:38.209777 1102415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:58:38.226250 1102415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:58:38.244868 1102415 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:58:38.385454 1102415 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:58:38.527891 1102415 docker.go:212] disabling docker service ...
	I0717 19:58:38.527973 1102415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:58:38.546083 1102415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:58:38.562767 1102415 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:58:38.702706 1102415 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:58:38.828923 1102415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:58:38.845137 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:58:38.866427 1102415 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:58:38.866511 1102415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:58:38.878067 1102415 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:58:38.878157 1102415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:58:38.892494 1102415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:58:38.905822 1102415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:58:38.917786 1102415 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:58:38.931418 1102415 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:58:38.945972 1102415 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:58:38.946039 1102415 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:58:38.964498 1102415 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:58:38.977323 1102415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:58:39.098593 1102415 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:58:39.320821 1102415 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:58:39.320909 1102415 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:58:39.327195 1102415 start.go:537] Will wait 60s for crictl version
	I0717 19:58:39.327285 1102415 ssh_runner.go:195] Run: which crictl
	I0717 19:58:39.333466 1102415 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:58:39.372542 1102415 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 19:58:39.372643 1102415 ssh_runner.go:195] Run: crio --version
	I0717 19:58:39.419356 1102415 ssh_runner.go:195] Run: crio --version
	I0717 19:58:39.467405 1102415 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0717 19:58:37.898938 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .Start
	I0717 19:58:37.899185 1103141 main.go:141] libmachine: (embed-certs-114855) Ensuring networks are active...
	I0717 19:58:37.900229 1103141 main.go:141] libmachine: (embed-certs-114855) Ensuring network default is active
	I0717 19:58:37.900690 1103141 main.go:141] libmachine: (embed-certs-114855) Ensuring network mk-embed-certs-114855 is active
	I0717 19:58:37.901444 1103141 main.go:141] libmachine: (embed-certs-114855) Getting domain xml...
	I0717 19:58:37.902311 1103141 main.go:141] libmachine: (embed-certs-114855) Creating domain...
	I0717 19:58:39.293109 1103141 main.go:141] libmachine: (embed-certs-114855) Waiting to get IP...
	I0717 19:58:39.294286 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:39.294784 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:39.294877 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:39.294761 1103558 retry.go:31] will retry after 201.93591ms: waiting for machine to come up
	I0717 19:58:39.498428 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:39.499066 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:39.499123 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:39.498979 1103558 retry.go:31] will retry after 321.702493ms: waiting for machine to come up
	I0717 19:58:39.822708 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:39.823258 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:39.823287 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:39.823212 1103558 retry.go:31] will retry after 477.114259ms: waiting for machine to come up
	I0717 19:58:40.302080 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:40.302727 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:40.302755 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:40.302668 1103558 retry.go:31] will retry after 554.321931ms: waiting for machine to come up
	I0717 19:58:38.674825 1102136 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:58:38.697168 1102136 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0717 19:58:38.719030 1102136 ssh_runner.go:195] Run: grep 192.168.61.65	control-plane.minikube.internal$ /etc/hosts
	I0717 19:58:38.724312 1102136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.65	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:58:38.742726 1102136 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472 for IP: 192.168.61.65
	I0717 19:58:38.742830 1102136 certs.go:190] acquiring lock for shared ca certs: {Name:mkdfbc203d7bd874aff2f1c4e5c3352251804e32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:58:38.743029 1102136 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key
	I0717 19:58:38.743082 1102136 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key
	I0717 19:58:38.743238 1102136 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/client.key
	I0717 19:58:38.743316 1102136 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/apiserver.key.71349e66
	I0717 19:58:38.743370 1102136 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/proxy-client.key
	I0717 19:58:38.743527 1102136 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem (1338 bytes)
	W0717 19:58:38.743579 1102136 certs.go:433] ignoring /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954_empty.pem, impossibly tiny 0 bytes
	I0717 19:58:38.743597 1102136 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:58:38.743631 1102136 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem (1082 bytes)
	I0717 19:58:38.743667 1102136 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:58:38.743699 1102136 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem (1675 bytes)
	I0717 19:58:38.743759 1102136 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:58:38.744668 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 19:58:38.773602 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 19:58:38.799675 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:58:38.826050 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 19:58:38.856973 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:58:38.886610 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 19:58:38.916475 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:58:38.945986 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 19:58:38.973415 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem --> /usr/share/ca-certificates/1068954.pem (1338 bytes)
	I0717 19:58:39.002193 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /usr/share/ca-certificates/10689542.pem (1708 bytes)
	I0717 19:58:39.030265 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:58:39.062896 1102136 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:58:39.082877 1102136 ssh_runner.go:195] Run: openssl version
	I0717 19:58:39.090088 1102136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1068954.pem && ln -fs /usr/share/ca-certificates/1068954.pem /etc/ssl/certs/1068954.pem"
	I0717 19:58:39.104372 1102136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1068954.pem
	I0717 19:58:39.110934 1102136 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:52 /usr/share/ca-certificates/1068954.pem
	I0717 19:58:39.111023 1102136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1068954.pem
	I0717 19:58:39.117702 1102136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1068954.pem /etc/ssl/certs/51391683.0"
	I0717 19:58:39.132094 1102136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10689542.pem && ln -fs /usr/share/ca-certificates/10689542.pem /etc/ssl/certs/10689542.pem"
	I0717 19:58:39.149143 1102136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10689542.pem
	I0717 19:58:39.155238 1102136 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:52 /usr/share/ca-certificates/10689542.pem
	I0717 19:58:39.155359 1102136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10689542.pem
	I0717 19:58:39.164149 1102136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10689542.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:58:39.178830 1102136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:58:39.192868 1102136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:58:39.199561 1102136 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:58:39.199663 1102136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:58:39.208054 1102136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:58:39.220203 1102136 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 19:58:39.228030 1102136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:58:39.235220 1102136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:58:39.243450 1102136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:58:39.250709 1102136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:58:39.260912 1102136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:58:39.269318 1102136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 19:58:39.277511 1102136 kubeadm.go:404] StartCluster: {Name:no-preload-408472 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:no-preload-408472 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.65 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:58:39.277701 1102136 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:58:39.277789 1102136 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:58:39.317225 1102136 cri.go:89] found id: ""
	I0717 19:58:39.317321 1102136 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:58:39.330240 1102136 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 19:58:39.330274 1102136 kubeadm.go:636] restartCluster start
	I0717 19:58:39.330351 1102136 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:58:39.343994 1102136 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:39.345762 1102136 kubeconfig.go:92] found "no-preload-408472" server: "https://192.168.61.65:8443"
	I0717 19:58:39.350027 1102136 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:58:39.360965 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:39.361039 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:39.375103 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:39.875778 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:39.875891 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:39.892869 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:40.375344 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:40.375421 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:40.392992 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:40.875474 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:40.875590 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:40.892666 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:41.375224 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:41.375335 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:41.393833 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:41.875377 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:41.875466 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:41.893226 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:42.375846 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:42.375957 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:42.390397 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:42.876105 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:42.876220 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:42.889082 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:43.375654 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:43.375774 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:43.392598 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:39.469543 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetIP
	I0717 19:58:39.472792 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:39.473333 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:39.473386 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:39.473640 1102415 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 19:58:39.478276 1102415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:58:39.491427 1102415 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 19:58:39.491514 1102415 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:58:39.527759 1102415 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0717 19:58:39.527856 1102415 ssh_runner.go:195] Run: which lz4
	I0717 19:58:39.532935 1102415 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 19:58:39.537733 1102415 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:58:39.537785 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0717 19:58:41.480847 1102415 crio.go:444] Took 1.947975 seconds to copy over tarball
	I0717 19:58:41.480932 1102415 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 19:58:40.858380 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:40.858925 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:40.858970 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:40.858865 1103558 retry.go:31] will retry after 616.432145ms: waiting for machine to come up
	I0717 19:58:41.476868 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:41.477399 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:41.477434 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:41.477348 1103558 retry.go:31] will retry after 780.737319ms: waiting for machine to come up
	I0717 19:58:42.259853 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:42.260278 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:42.260310 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:42.260216 1103558 retry.go:31] will retry after 858.918849ms: waiting for machine to come up
	I0717 19:58:43.120599 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:43.121211 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:43.121247 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:43.121155 1103558 retry.go:31] will retry after 1.359881947s: waiting for machine to come up
	I0717 19:58:44.482733 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:44.483173 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:44.483203 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:44.483095 1103558 retry.go:31] will retry after 1.298020016s: waiting for machine to come up
	I0717 19:58:43.875260 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:43.875367 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:43.892010 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:44.376275 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:44.376378 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:44.394725 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:44.875258 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:44.875377 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:44.890500 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:45.376203 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:45.376337 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:45.392119 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:45.875466 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:45.875573 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:45.888488 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:46.376141 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:46.376288 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:46.391072 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:46.875635 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:46.875797 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:46.895087 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:47.375551 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:47.375653 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:47.392620 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:47.875197 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:47.875340 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:47.887934 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:48.375469 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:48.375588 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:48.392548 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:44.570404 1102415 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.089433908s)
	I0717 19:58:44.570451 1102415 crio.go:451] Took 3.089562 seconds to extract the tarball
	I0717 19:58:44.570465 1102415 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 19:58:44.615062 1102415 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:58:44.660353 1102415 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 19:58:44.660385 1102415 cache_images.go:84] Images are preloaded, skipping loading
	I0717 19:58:44.660468 1102415 ssh_runner.go:195] Run: crio config
	I0717 19:58:44.726880 1102415 cni.go:84] Creating CNI manager for ""
	I0717 19:58:44.726915 1102415 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:58:44.726946 1102415 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 19:58:44.726973 1102415 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.51 APIServerPort:8444 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-711413 NodeName:default-k8s-diff-port-711413 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:58:44.727207 1102415 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.51
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-711413"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:58:44.727340 1102415 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-711413 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-711413 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0717 19:58:44.727430 1102415 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 19:58:44.740398 1102415 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:58:44.740509 1102415 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:58:44.751288 1102415 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I0717 19:58:44.769779 1102415 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:58:44.788216 1102415 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I0717 19:58:44.808085 1102415 ssh_runner.go:195] Run: grep 192.168.72.51	control-plane.minikube.internal$ /etc/hosts
	I0717 19:58:44.812829 1102415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:58:44.826074 1102415 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413 for IP: 192.168.72.51
	I0717 19:58:44.826123 1102415 certs.go:190] acquiring lock for shared ca certs: {Name:mkdfbc203d7bd874aff2f1c4e5c3352251804e32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:58:44.826373 1102415 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key
	I0717 19:58:44.826440 1102415 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key
	I0717 19:58:44.826543 1102415 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/client.key
	I0717 19:58:44.826629 1102415 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/apiserver.key.f6db28d6
	I0717 19:58:44.826697 1102415 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/proxy-client.key
	I0717 19:58:44.826855 1102415 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem (1338 bytes)
	W0717 19:58:44.826902 1102415 certs.go:433] ignoring /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954_empty.pem, impossibly tiny 0 bytes
	I0717 19:58:44.826915 1102415 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:58:44.826953 1102415 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem (1082 bytes)
	I0717 19:58:44.826988 1102415 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:58:44.827026 1102415 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem (1675 bytes)
	I0717 19:58:44.827091 1102415 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:58:44.828031 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 19:58:44.856357 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 19:58:44.884042 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:58:44.915279 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 19:58:44.945170 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:58:44.974151 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 19:58:45.000387 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:58:45.027617 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 19:58:45.054305 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem --> /usr/share/ca-certificates/1068954.pem (1338 bytes)
	I0717 19:58:45.080828 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /usr/share/ca-certificates/10689542.pem (1708 bytes)
	I0717 19:58:45.107437 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:58:45.135588 1102415 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:58:45.155297 1102415 ssh_runner.go:195] Run: openssl version
	I0717 19:58:45.162096 1102415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10689542.pem && ln -fs /usr/share/ca-certificates/10689542.pem /etc/ssl/certs/10689542.pem"
	I0717 19:58:45.175077 1102415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10689542.pem
	I0717 19:58:45.180966 1102415 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:52 /usr/share/ca-certificates/10689542.pem
	I0717 19:58:45.181050 1102415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10689542.pem
	I0717 19:58:45.187351 1102415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10689542.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:58:45.199795 1102415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:58:45.214273 1102415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:58:45.220184 1102415 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:58:45.220269 1102415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:58:45.227207 1102415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:58:45.239921 1102415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1068954.pem && ln -fs /usr/share/ca-certificates/1068954.pem /etc/ssl/certs/1068954.pem"
	I0717 19:58:45.252978 1102415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1068954.pem
	I0717 19:58:45.259164 1102415 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:52 /usr/share/ca-certificates/1068954.pem
	I0717 19:58:45.259257 1102415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1068954.pem
	I0717 19:58:45.266134 1102415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1068954.pem /etc/ssl/certs/51391683.0"
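Note: the sequence above installs each CA into the guest's trust store: the PEM is copied under /usr/share/ca-certificates, its OpenSSL subject hash is computed with `openssl x509 -hash -noout`, and a symlink named `<hash>.0` (e.g. b5213941.0 for minikubeCA.pem) is created in /etc/ssl/certs. A compact Go sketch of those same steps, driven by the exact openssl flags shown in the log (illustrative only; the path in main is one of the files from the log, not a requirement):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCA links certPath into /etc/ssl/certs under its OpenSSL subject hash,
    // mirroring the `openssl x509 -hash` + `ln -fs` steps in the log above.
    func installCA(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	// Remove a stale link if present, then point <hash>.0 at the certificate.
    	_ = os.Remove(link)
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }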
	I0717 19:58:45.281302 1102415 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 19:58:45.287179 1102415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:58:45.294860 1102415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:58:45.302336 1102415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:58:45.309621 1102415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:58:45.316590 1102415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:58:45.323564 1102415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
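Note: the six `openssl x509 -noout -in <cert> -checkend 86400` runs just above confirm that each existing control-plane certificate stays valid for at least another 24 hours before the cluster restart is attempted. A minimal Go sketch of the same "expires within 24h?" test using crypto/x509 (illustrative only, not minikube's certs code; the path in main is taken from the log):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM-encoded certificate at path expires
    // within the given duration, which is what `openssl x509 -checkend` tests.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("expires within 24h:", soon)
    }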
	I0717 19:58:45.330904 1102415 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-711413 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-711413 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.51 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:58:45.331050 1102415 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:58:45.331115 1102415 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:58:45.368522 1102415 cri.go:89] found id: ""
	I0717 19:58:45.368606 1102415 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:58:45.380610 1102415 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 19:58:45.380640 1102415 kubeadm.go:636] restartCluster start
	I0717 19:58:45.380711 1102415 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:58:45.391395 1102415 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:45.392845 1102415 kubeconfig.go:92] found "default-k8s-diff-port-711413" server: "https://192.168.72.51:8444"
	I0717 19:58:45.395718 1102415 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:58:45.405869 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:45.405954 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:45.417987 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:45.918789 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:45.918924 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:45.935620 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:46.418786 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:46.418918 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:46.435879 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:46.918441 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:46.918570 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:46.934753 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:47.418315 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:47.418429 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:47.434411 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:47.918984 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:47.919143 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:47.930556 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:48.418827 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:48.418915 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:48.430779 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:48.918288 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:48.918395 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:48.929830 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
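Note: the block above polls roughly every 500ms for a kube-apiserver process with `sudo pgrep -xnf kube-apiserver.*minikube.*`; every run exits with status 1 because the control plane has not been started yet on default-k8s-diff-port-711413. A rough Go sketch of that kind of process poll (illustrative only, not minikube's api_server.go; timeout value is an assumption):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForProcess polls pgrep until the pattern matches a running process
    // or the deadline passes, returning pgrep's output (the matching PID).
    func waitForProcess(pattern string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("pgrep", "-xnf", pattern).Output()
    		if err == nil {
    			return string(out), nil // pgrep exits 0 once a process matches
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return "", fmt.Errorf("no process matching %q within %s", pattern, timeout)
    }

    func main() {
    	pid, err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Print("apiserver pid: ", pid)
    }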
	I0717 19:58:45.782651 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:45.853667 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:45.853691 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:45.783094 1103558 retry.go:31] will retry after 2.002921571s: waiting for machine to come up
	I0717 19:58:47.788455 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:47.788965 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:47.788995 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:47.788914 1103558 retry.go:31] will retry after 2.108533646s: waiting for machine to come up
	I0717 19:58:49.899541 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:49.900028 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:49.900073 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:49.899974 1103558 retry.go:31] will retry after 3.529635748s: waiting for machine to come up
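Note: the libmachine lines above show the embed-certs-114855 VM being polled for an IP address from its DHCP lease, with the wait growing (and jittering) on each failed attempt: 2.0s, 2.1s, 3.5s, and so on. A generic retry-with-growing-wait sketch of that pattern (illustrative only, not libmachine's retry.go; attempt count and base delay are assumptions):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff keeps calling fn until it succeeds or attempts run out,
    // sleeping a little longer (plus jitter) between tries.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		wait := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(time.Second)))
    		fmt.Printf("will retry after %s: %v\n", wait, err)
    		time.Sleep(wait)
    	}
    	return err
    }

    func main() {
    	_ = retryWithBackoff(5, 2*time.Second, func() error {
    		// Placeholder for "look up the domain's IP from its DHCP lease".
    		return errors.New("waiting for machine to come up")
    	})
    }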
	I0717 19:58:48.875708 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:48.875803 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:48.893686 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:49.362030 1102136 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0717 19:58:49.362079 1102136 kubeadm.go:1128] stopping kube-system containers ...
	I0717 19:58:49.362096 1102136 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:58:49.362166 1102136 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:58:49.405900 1102136 cri.go:89] found id: ""
	I0717 19:58:49.405997 1102136 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:58:49.429666 1102136 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:58:49.440867 1102136 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:58:49.440938 1102136 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:58:49.454993 1102136 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 19:58:49.455023 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:49.606548 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:50.568083 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:50.782373 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:50.895178 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:50.999236 1102136 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:58:50.999321 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:51.519969 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:52.019769 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:52.519618 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:53.020330 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:53.519378 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:53.549727 1102136 api_server.go:72] duration metric: took 2.550491567s to wait for apiserver process to appear ...
	I0717 19:58:53.549757 1102136 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:58:53.549778 1102136 api_server.go:253] Checking apiserver healthz at https://192.168.61.65:8443/healthz ...
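Note: after the `kubeadm init phase` steps, the log first waits for a kube-apiserver process and then probes its /healthz endpoint over HTTPS, as in the line above. A rough sketch of such a healthz probe (illustrative only; the URL is taken from the log, and a real client would trust the cluster CA instead of skipping verification):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // probeHealthz fetches the apiserver /healthz endpoint and returns the
    // HTTP status code plus the response body (which lists individual checks).
    func probeHealthz(url string) (int, string, error) {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// For illustration only: skip TLS verification. Real code should
    		// present the cluster CA certificate instead.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return 0, "", err
    	}
    	defer resp.Body.Close()
    	body, err := io.ReadAll(resp.Body)
    	return resp.StatusCode, string(body), err
    }

    func main() {
    	code, body, err := probeHealthz("https://192.168.61.65:8443/healthz")
    	if err != nil {
    		fmt.Println("probe failed:", err)
    		return
    	}
    	fmt.Println(code, body)
    }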
	I0717 19:58:49.418724 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:49.418839 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:49.431867 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:49.918433 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:49.918602 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:49.933324 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:50.418991 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:50.419113 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:50.433912 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:50.919128 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:50.919228 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:50.934905 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:51.418418 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:51.418557 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:51.430640 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:51.918136 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:51.918248 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:51.933751 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:52.418277 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:52.418388 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:52.434907 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:52.918570 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:52.918702 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:52.933426 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:53.418734 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:53.418828 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:53.431710 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:53.918381 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:53.918502 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:53.930053 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:53.431544 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:53.432055 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:53.432087 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:53.431995 1103558 retry.go:31] will retry after 3.133721334s: waiting for machine to come up
	I0717 19:58:57.990532 1102136 api_server.go:279] https://192.168.61.65:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:58:57.990581 1102136 api_server.go:103] status: https://192.168.61.65:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:58:58.491387 1102136 api_server.go:253] Checking apiserver healthz at https://192.168.61.65:8443/healthz ...
	I0717 19:58:58.501594 1102136 api_server.go:279] https://192.168.61.65:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:58:58.501636 1102136 api_server.go:103] status: https://192.168.61.65:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
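Note: in the 500 responses above, the apiserver returns one line per health check: `[+]` marks a passing check and `[-]` a failing one (here the rbac/bootstrap-roles and bootstrap-system-priority-classes post-start hooks have not finished yet). A small sketch that pulls the failing check names out of such a body (illustrative helper, not part of minikube):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // failingChecks returns the names of checks marked "[-]" in a verbose
    // /healthz response body.
    func failingChecks(body string) []string {
    	var failed []string
    	for _, line := range strings.Split(body, "\n") {
    		line = strings.TrimSpace(line)
    		if strings.HasPrefix(line, "[-]") {
    			// Line looks like "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld".
    			name := strings.TrimPrefix(line, "[-]")
    			if i := strings.IndexByte(name, ' '); i > 0 {
    				name = name[:i]
    			}
    			failed = append(failed, name)
    		}
    	}
    	return failed
    }

    func main() {
    	body := "[+]ping ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\nhealthz check failed"
    	fmt.Println(failingChecks(body)) // [poststarthook/rbac/bootstrap-roles]
    }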
	I0717 19:58:54.418156 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:54.418290 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:54.430262 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:54.918831 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:54.918933 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:54.930380 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:55.406385 1102415 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0717 19:58:55.406432 1102415 kubeadm.go:1128] stopping kube-system containers ...
	I0717 19:58:55.406451 1102415 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:58:55.406530 1102415 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:58:55.444364 1102415 cri.go:89] found id: ""
	I0717 19:58:55.444447 1102415 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:58:55.460367 1102415 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:58:55.472374 1102415 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:58:55.472469 1102415 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:58:55.482078 1102415 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 19:58:55.482121 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:55.630428 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:56.221310 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:56.460424 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:56.570707 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:56.691954 1102415 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:58:56.692053 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:57.209115 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:57.708801 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:58.209204 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:58.709268 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:58.991630 1102136 api_server.go:253] Checking apiserver healthz at https://192.168.61.65:8443/healthz ...
	I0717 19:58:58.999253 1102136 api_server.go:279] https://192.168.61.65:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:58:58.999295 1102136 api_server.go:103] status: https://192.168.61.65:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:58:59.491062 1102136 api_server.go:253] Checking apiserver healthz at https://192.168.61.65:8443/healthz ...
	I0717 19:58:59.498441 1102136 api_server.go:279] https://192.168.61.65:8443/healthz returned 200:
	ok
	I0717 19:58:59.514314 1102136 api_server.go:141] control plane version: v1.27.3
	I0717 19:58:59.514353 1102136 api_server.go:131] duration metric: took 5.964587051s to wait for apiserver health ...
	I0717 19:58:59.514368 1102136 cni.go:84] Creating CNI manager for ""
	I0717 19:58:59.514403 1102136 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:58:59.516809 1102136 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:58:56.567585 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:56.568167 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:56.568203 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:56.568069 1103558 retry.go:31] will retry after 4.627498539s: waiting for machine to come up
	I0717 19:58:59.518908 1102136 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:58:59.549246 1102136 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
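Note: the two commands above install a bridge CNI configuration at /etc/cni/net.d/1-k8s.conflist. The file's contents are not shown in this log; a generic bridge + portmap conflist of the shape the CNI spec expects is sketched below, built from Go for illustration (every field value here is an assumption, not minikube's actual file):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	// Illustrative bridge+portmap CNI config list; the real 1-k8s.conflist
    	// may use different names, versions and subnets.
    	conflist := map[string]any{
    		"cniVersion": "0.3.1",
    		"name":       "bridge",
    		"plugins": []map[string]any{
    			{
    				"type":             "bridge",
    				"bridge":           "bridge",
    				"isDefaultGateway": true,
    				"ipMasq":           true,
    				"hairpinMode":      true,
    				"ipam": map[string]any{
    					"type":   "host-local",
    					"subnet": "10.244.0.0/16",
    				},
    			},
    			{
    				"type":         "portmap",
    				"capabilities": map[string]bool{"portMappings": true},
    			},
    		},
    	}
    	out, _ := json.MarshalIndent(conflist, "", "  ")
    	fmt.Println(string(out))
    }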
	I0717 19:58:59.598652 1102136 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:58:59.614418 1102136 system_pods.go:59] 8 kube-system pods found
	I0717 19:58:59.614482 1102136 system_pods.go:61] "coredns-5d78c9869d-9mxdj" [fbff09fd-436d-4208-9187-b6312aa1c223] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 19:58:59.614506 1102136 system_pods.go:61] "etcd-no-preload-408472" [7125f19d-c1ed-4b1f-99be-207bbf5d8c70] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:58:59.614519 1102136 system_pods.go:61] "kube-apiserver-no-preload-408472" [0e54eaed-daad-434a-ad3b-96f7fb924099] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:58:59.614529 1102136 system_pods.go:61] "kube-controller-manager-no-preload-408472" [38ee8079-8142-45a9-8d5a-4abbc5c8bb3b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:58:59.614547 1102136 system_pods.go:61] "kube-proxy-cntdn" [8653567b-abf9-468c-a030-45fc53fa0cc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 19:58:59.614558 1102136 system_pods.go:61] "kube-scheduler-no-preload-408472" [e51560a1-c1b0-407c-8635-df512bd033b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:58:59.614575 1102136 system_pods.go:61] "metrics-server-74d5c6b9c-hnngh" [dfff837e-dbba-4795-935d-9562d2744169] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:58:59.614637 1102136 system_pods.go:61] "storage-provisioner" [1aefd8ef-dec9-4e37-8648-8e5a62622cd3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 19:58:59.614658 1102136 system_pods.go:74] duration metric: took 15.975122ms to wait for pod list to return data ...
	I0717 19:58:59.614669 1102136 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:58:59.621132 1102136 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 19:58:59.621181 1102136 node_conditions.go:123] node cpu capacity is 2
	I0717 19:58:59.621197 1102136 node_conditions.go:105] duration metric: took 6.519635ms to run NodePressure ...
	I0717 19:58:59.621224 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:59.909662 1102136 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 19:58:59.915153 1102136 kubeadm.go:787] kubelet initialised
	I0717 19:58:59.915190 1102136 kubeadm.go:788] duration metric: took 5.491139ms waiting for restarted kubelet to initialise ...
	I0717 19:58:59.915201 1102136 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:58:59.925196 1102136 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-9mxdj" in "kube-system" namespace to be "Ready" ...
	I0717 19:58:59.934681 1102136 pod_ready.go:97] node "no-preload-408472" hosting pod "coredns-5d78c9869d-9mxdj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:58:59.934715 1102136 pod_ready.go:81] duration metric: took 9.478384ms waiting for pod "coredns-5d78c9869d-9mxdj" in "kube-system" namespace to be "Ready" ...
	E0717 19:58:59.934728 1102136 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-408472" hosting pod "coredns-5d78c9869d-9mxdj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:58:59.934742 1102136 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:58:59.949704 1102136 pod_ready.go:97] node "no-preload-408472" hosting pod "etcd-no-preload-408472" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:58:59.949744 1102136 pod_ready.go:81] duration metric: took 14.992167ms waiting for pod "etcd-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	E0717 19:58:59.949757 1102136 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-408472" hosting pod "etcd-no-preload-408472" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:58:59.949766 1102136 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:58:59.958029 1102136 pod_ready.go:97] node "no-preload-408472" hosting pod "kube-apiserver-no-preload-408472" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:58:59.958083 1102136 pod_ready.go:81] duration metric: took 8.306713ms waiting for pod "kube-apiserver-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	E0717 19:58:59.958096 1102136 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-408472" hosting pod "kube-apiserver-no-preload-408472" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:58:59.958110 1102136 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:00.003638 1102136 pod_ready.go:97] node "no-preload-408472" hosting pod "kube-controller-manager-no-preload-408472" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:59:00.003689 1102136 pod_ready.go:81] duration metric: took 45.565817ms waiting for pod "kube-controller-manager-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:00.003702 1102136 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-408472" hosting pod "kube-controller-manager-no-preload-408472" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:59:00.003714 1102136 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cntdn" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:00.403384 1102136 pod_ready.go:97] node "no-preload-408472" hosting pod "kube-proxy-cntdn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:59:00.403421 1102136 pod_ready.go:81] duration metric: took 399.697327ms waiting for pod "kube-proxy-cntdn" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:00.403431 1102136 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-408472" hosting pod "kube-proxy-cntdn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:59:00.403440 1102136 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:00.803159 1102136 pod_ready.go:97] node "no-preload-408472" hosting pod "kube-scheduler-no-preload-408472" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:59:00.803192 1102136 pod_ready.go:81] duration metric: took 399.744356ms waiting for pod "kube-scheduler-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:00.803205 1102136 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-408472" hosting pod "kube-scheduler-no-preload-408472" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:59:00.803217 1102136 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:01.206222 1102136 pod_ready.go:97] node "no-preload-408472" hosting pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:59:01.206247 1102136 pod_ready.go:81] duration metric: took 403.0216ms waiting for pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:01.206256 1102136 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-408472" hosting pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:59:01.206271 1102136 pod_ready.go:38] duration metric: took 1.291054316s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
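Note: the pod_ready lines above wait for each system-critical pod to report the Ready condition and bail out early here because the no-preload-408472 node itself is still "Ready":"False". A minimal client-go sketch of that per-pod check (illustrative only; the kubeconfig path is a placeholder and the pod name is taken from the log):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the named pod has the Ready condition set to True.
    func podIsReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ready, err := podIsReady(cs, "kube-system", "etcd-no-preload-408472")
    	fmt.Println(ready, err)
    }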
	I0717 19:59:01.206293 1102136 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 19:59:01.225481 1102136 ops.go:34] apiserver oom_adj: -16
	I0717 19:59:01.225516 1102136 kubeadm.go:640] restartCluster took 21.895234291s
	I0717 19:59:01.225528 1102136 kubeadm.go:406] StartCluster complete in 21.948029137s
	I0717 19:59:01.225551 1102136 settings.go:142] acquiring lock: {Name:mk3323296c186763db6ebb5a1c5ae94a6b1a6242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:59:01.225672 1102136 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 19:59:01.228531 1102136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/kubeconfig: {Name:mk7f4fe48169c87be980c7edd9dbe55d4ea8b9ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:59:01.228913 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 19:59:01.229088 1102136 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 19:59:01.229192 1102136 config.go:182] Loaded profile config "no-preload-408472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:59:01.229244 1102136 addons.go:69] Setting metrics-server=true in profile "no-preload-408472"
	I0717 19:59:01.229249 1102136 addons.go:69] Setting default-storageclass=true in profile "no-preload-408472"
	I0717 19:59:01.229280 1102136 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-408472"
	I0717 19:59:01.229299 1102136 addons.go:231] Setting addon metrics-server=true in "no-preload-408472"
	W0717 19:59:01.229307 1102136 addons.go:240] addon metrics-server should already be in state true
	I0717 19:59:01.229241 1102136 addons.go:69] Setting storage-provisioner=true in profile "no-preload-408472"
	I0717 19:59:01.229353 1102136 addons.go:231] Setting addon storage-provisioner=true in "no-preload-408472"
	W0717 19:59:01.229366 1102136 addons.go:240] addon storage-provisioner should already be in state true
	I0717 19:59:01.229440 1102136 host.go:66] Checking if "no-preload-408472" exists ...
	I0717 19:59:01.229447 1102136 host.go:66] Checking if "no-preload-408472" exists ...
	I0717 19:59:01.229764 1102136 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:01.229804 1102136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:01.229833 1102136 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:01.229854 1102136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:01.229897 1102136 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:01.229943 1102136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:01.235540 1102136 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-408472" context rescaled to 1 replicas
	I0717 19:59:01.235641 1102136 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.65 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:59:01.239320 1102136 out.go:177] * Verifying Kubernetes components...
	I0717 19:59:01.241167 1102136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:59:01.247222 1102136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43199
	I0717 19:59:01.247751 1102136 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:01.248409 1102136 main.go:141] libmachine: Using API Version  1
	I0717 19:59:01.248438 1102136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:01.248825 1102136 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:01.249141 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetState
	I0717 19:59:01.249823 1102136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42025
	I0717 19:59:01.249829 1102136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34569
	I0717 19:59:01.250716 1102136 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:01.250724 1102136 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:01.251383 1102136 main.go:141] libmachine: Using API Version  1
	I0717 19:59:01.251409 1102136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:01.251591 1102136 main.go:141] libmachine: Using API Version  1
	I0717 19:59:01.251612 1102136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:01.252011 1102136 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:01.252078 1102136 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:01.252646 1102136 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:01.252679 1102136 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:01.252688 1102136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:01.252700 1102136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:01.270584 1102136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33003
	I0717 19:59:01.270664 1102136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40173
	I0717 19:59:01.271057 1102136 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:01.271170 1102136 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:01.271634 1102136 main.go:141] libmachine: Using API Version  1
	I0717 19:59:01.271656 1102136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:01.271782 1102136 main.go:141] libmachine: Using API Version  1
	I0717 19:59:01.271807 1102136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:01.272018 1102136 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:01.272158 1102136 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:01.272237 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetState
	I0717 19:59:01.272362 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetState
	I0717 19:59:01.274521 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	I0717 19:59:01.274525 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	I0717 19:59:01.277458 1102136 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 19:59:01.279611 1102136 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:59:02.603147 1101908 start.go:369] acquired machines lock for "old-k8s-version-149000" in 1m3.679538618s
	I0717 19:59:02.603207 1101908 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:59:02.603219 1101908 fix.go:54] fixHost starting: 
	I0717 19:59:02.603691 1101908 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:02.603736 1101908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:02.625522 1101908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46275
	I0717 19:59:02.626230 1101908 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:02.626836 1101908 main.go:141] libmachine: Using API Version  1
	I0717 19:59:02.626876 1101908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:02.627223 1101908 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:02.627395 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	I0717 19:59:02.627513 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetState
	I0717 19:59:02.629627 1101908 fix.go:102] recreateIfNeeded on old-k8s-version-149000: state=Stopped err=<nil>
	I0717 19:59:02.629669 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	W0717 19:59:02.629894 1101908 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 19:59:02.632584 1101908 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-149000" ...
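Note: here a third profile (old-k8s-version-149000) takes its turn: start.go spent just over a minute waiting to acquire the shared machines lock before fixing the stopped VM. A minimal sketch of guarding such a critical section with an exclusive file lock (illustrative only; minikube's actual locking differs, and the lock path is a placeholder):

    package main

    import (
    	"fmt"
    	"os"
    	"syscall"
    	"time"
    )

    // withMachinesLock runs fn while holding an exclusive lock on lockPath,
    // blocking until the lock is free (the log above shows ~1m spent waiting).
    func withMachinesLock(lockPath string, fn func() error) error {
    	f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_RDWR, 0o600)
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	start := time.Now()
    	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
    		return err
    	}
    	defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
    	fmt.Printf("acquired machines lock in %s\n", time.Since(start))
    	return fn()
    }

    func main() {
    	_ = withMachinesLock("/tmp/minikube-machines.lock", func() error {
    		// Placeholder for fixHost / restarting the stopped VM.
    		return nil
    	})
    }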
	I0717 19:59:01.279643 1102136 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 19:59:01.281507 1102136 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:59:01.281513 1102136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 19:59:01.281520 1102136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 19:59:01.281545 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:59:01.281545 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:59:01.286403 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:59:01.286708 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:59:01.286766 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:59:01.286801 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:59:01.287001 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:59:01.287264 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:59:01.287523 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:59:01.287525 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:59:01.287606 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:59:01.287736 1102136 sshutil.go:53] new ssh client: &{IP:192.168.61.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/no-preload-408472/id_rsa Username:docker}
	I0717 19:59:01.287791 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:59:01.288610 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:59:01.288821 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:59:01.288982 1102136 sshutil.go:53] new ssh client: &{IP:192.168.61.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/no-preload-408472/id_rsa Username:docker}
	I0717 19:59:01.291242 1102136 addons.go:231] Setting addon default-storageclass=true in "no-preload-408472"
	W0717 19:59:01.291259 1102136 addons.go:240] addon default-storageclass should already be in state true
	I0717 19:59:01.291287 1102136 host.go:66] Checking if "no-preload-408472" exists ...
	I0717 19:59:01.291542 1102136 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:01.291569 1102136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:01.309690 1102136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43629
	I0717 19:59:01.310234 1102136 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:01.310915 1102136 main.go:141] libmachine: Using API Version  1
	I0717 19:59:01.310944 1102136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:01.311356 1102136 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:01.311903 1102136 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:01.311953 1102136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:01.350859 1102136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36839
	I0717 19:59:01.351342 1102136 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:01.351922 1102136 main.go:141] libmachine: Using API Version  1
	I0717 19:59:01.351950 1102136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:01.352334 1102136 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:01.352512 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetState
	I0717 19:59:01.354421 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	I0717 19:59:01.354815 1102136 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 19:59:01.354832 1102136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 19:59:01.354853 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:59:01.358180 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:59:01.358632 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:59:01.358651 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:59:01.358833 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:59:01.359049 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:59:01.359435 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:59:01.359582 1102136 sshutil.go:53] new ssh client: &{IP:192.168.61.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/no-preload-408472/id_rsa Username:docker}
	I0717 19:59:01.510575 1102136 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 19:59:01.510598 1102136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 19:59:01.534331 1102136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 19:59:01.545224 1102136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:59:01.582904 1102136 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 19:59:01.582945 1102136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 19:59:01.645312 1102136 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:59:01.645342 1102136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 19:59:01.715240 1102136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:59:01.746252 1102136 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0717 19:59:01.746249 1102136 node_ready.go:35] waiting up to 6m0s for node "no-preload-408472" to be "Ready" ...
	I0717 19:58:59.208473 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:59.241367 1102415 api_server.go:72] duration metric: took 2.549409381s to wait for apiserver process to appear ...
	I0717 19:58:59.241403 1102415 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:58:59.241432 1102415 api_server.go:253] Checking apiserver healthz at https://192.168.72.51:8444/healthz ...
	I0717 19:59:03.909722 1102415 api_server.go:279] https://192.168.72.51:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:59:03.909763 1102415 api_server.go:103] status: https://192.168.72.51:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:59:03.702857 1102136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.168474279s)
	I0717 19:59:03.702921 1102136 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:03.702938 1102136 main.go:141] libmachine: (no-preload-408472) Calling .Close
	I0717 19:59:03.703307 1102136 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:03.703331 1102136 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:03.703343 1102136 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:03.703353 1102136 main.go:141] libmachine: (no-preload-408472) Calling .Close
	I0717 19:59:03.703705 1102136 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:03.703735 1102136 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:03.703753 1102136 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:03.703766 1102136 main.go:141] libmachine: (no-preload-408472) Calling .Close
	I0717 19:59:03.705061 1102136 main.go:141] libmachine: (no-preload-408472) DBG | Closing plugin on server side
	I0717 19:59:03.705164 1102136 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:03.705187 1102136 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:03.793171 1102136 node_ready.go:58] node "no-preload-408472" has status "Ready":"False"
	I0717 19:59:04.294821 1102136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.749544143s)
	I0717 19:59:04.294904 1102136 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:04.294922 1102136 main.go:141] libmachine: (no-preload-408472) Calling .Close
	I0717 19:59:04.295362 1102136 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:04.295380 1102136 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:04.295391 1102136 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:04.295403 1102136 main.go:141] libmachine: (no-preload-408472) Calling .Close
	I0717 19:59:04.295470 1102136 main.go:141] libmachine: (no-preload-408472) DBG | Closing plugin on server side
	I0717 19:59:04.295674 1102136 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:04.295703 1102136 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:04.349340 1102136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.634046821s)
	I0717 19:59:04.349410 1102136 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:04.349428 1102136 main.go:141] libmachine: (no-preload-408472) Calling .Close
	I0717 19:59:04.349817 1102136 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:04.349837 1102136 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:04.349848 1102136 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:04.349858 1102136 main.go:141] libmachine: (no-preload-408472) Calling .Close
	I0717 19:59:04.349864 1102136 main.go:141] libmachine: (no-preload-408472) DBG | Closing plugin on server side
	I0717 19:59:04.350081 1102136 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:04.350097 1102136 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:04.350116 1102136 addons.go:467] Verifying addon metrics-server=true in "no-preload-408472"
	I0717 19:59:04.353040 1102136 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0717 19:59:01.198818 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.199367 1103141 main.go:141] libmachine: (embed-certs-114855) Found IP for machine: 192.168.39.213
	I0717 19:59:01.199394 1103141 main.go:141] libmachine: (embed-certs-114855) Reserving static IP address...
	I0717 19:59:01.199415 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has current primary IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.199879 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "embed-certs-114855", mac: "52:54:00:d6:57:9a", ip: "192.168.39.213"} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:01.199916 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | skip adding static IP to network mk-embed-certs-114855 - found existing host DHCP lease matching {name: "embed-certs-114855", mac: "52:54:00:d6:57:9a", ip: "192.168.39.213"}
	I0717 19:59:01.199934 1103141 main.go:141] libmachine: (embed-certs-114855) Reserved static IP address: 192.168.39.213
	I0717 19:59:01.199952 1103141 main.go:141] libmachine: (embed-certs-114855) Waiting for SSH to be available...
	I0717 19:59:01.199960 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | Getting to WaitForSSH function...
	I0717 19:59:01.202401 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.202876 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:01.202910 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.203075 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | Using SSH client type: external
	I0717 19:59:01.203121 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | Using SSH private key: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/embed-certs-114855/id_rsa (-rw-------)
	I0717 19:59:01.203172 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.213 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/embed-certs-114855/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:59:01.203195 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | About to run SSH command:
	I0717 19:59:01.203208 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | exit 0
	I0717 19:59:01.298366 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | SSH cmd err, output: <nil>: 
	I0717 19:59:01.298876 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetConfigRaw
	I0717 19:59:01.299753 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetIP
	I0717 19:59:01.303356 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.304237 1103141 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855/config.json ...
	I0717 19:59:01.304526 1103141 machine.go:88] provisioning docker machine ...
	I0717 19:59:01.304569 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 19:59:01.304668 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:01.304694 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.304847 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetMachineName
	I0717 19:59:01.305079 1103141 buildroot.go:166] provisioning hostname "embed-certs-114855"
	I0717 19:59:01.305103 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetMachineName
	I0717 19:59:01.305324 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 19:59:01.308214 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.308591 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:01.308630 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.308805 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 19:59:01.309016 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:01.309195 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:01.309346 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 19:59:01.309591 1103141 main.go:141] libmachine: Using SSH client type: native
	I0717 19:59:01.310205 1103141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0717 19:59:01.310227 1103141 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-114855 && echo "embed-certs-114855" | sudo tee /etc/hostname
	I0717 19:59:01.453113 1103141 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-114855
	
	I0717 19:59:01.453149 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 19:59:01.456502 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.456918 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:01.456981 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.457107 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 19:59:01.457291 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:01.457514 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:01.457711 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 19:59:01.457923 1103141 main.go:141] libmachine: Using SSH client type: native
	I0717 19:59:01.458567 1103141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0717 19:59:01.458597 1103141 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-114855' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-114855/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-114855' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:59:01.599062 1103141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:59:01.599112 1103141 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16890-1061725/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-1061725/.minikube}
	I0717 19:59:01.599143 1103141 buildroot.go:174] setting up certificates
	I0717 19:59:01.599161 1103141 provision.go:83] configureAuth start
	I0717 19:59:01.599194 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetMachineName
	I0717 19:59:01.599579 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetIP
	I0717 19:59:01.602649 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.603014 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:01.603050 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.603218 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 19:59:01.606042 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.606485 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:01.606531 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.606679 1103141 provision.go:138] copyHostCerts
	I0717 19:59:01.606754 1103141 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem, removing ...
	I0717 19:59:01.606767 1103141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem
	I0717 19:59:01.606839 1103141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem (1082 bytes)
	I0717 19:59:01.607009 1103141 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem, removing ...
	I0717 19:59:01.607025 1103141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem
	I0717 19:59:01.607061 1103141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem (1123 bytes)
	I0717 19:59:01.607158 1103141 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem, removing ...
	I0717 19:59:01.607174 1103141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem
	I0717 19:59:01.607204 1103141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem (1675 bytes)
	I0717 19:59:01.607298 1103141 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem org=jenkins.embed-certs-114855 san=[192.168.39.213 192.168.39.213 localhost 127.0.0.1 minikube embed-certs-114855]
	I0717 19:59:01.721082 1103141 provision.go:172] copyRemoteCerts
	I0717 19:59:01.721179 1103141 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:59:01.721223 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 19:59:01.724636 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.725093 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:01.725127 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.725418 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 19:59:01.725708 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:01.725896 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 19:59:01.726056 1103141 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/embed-certs-114855/id_rsa Username:docker}
	I0717 19:59:01.826710 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0717 19:59:01.861153 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 19:59:01.889779 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 19:59:01.919893 1103141 provision.go:86] duration metric: configureAuth took 320.712718ms
	I0717 19:59:01.919929 1103141 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:59:01.920192 1103141 config.go:182] Loaded profile config "embed-certs-114855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:59:01.920283 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 19:59:01.923585 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.926174 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:01.926264 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.926897 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 19:59:01.927167 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:01.927365 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:01.927512 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 19:59:01.927712 1103141 main.go:141] libmachine: Using SSH client type: native
	I0717 19:59:01.928326 1103141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0717 19:59:01.928350 1103141 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:59:02.302372 1103141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:59:02.302427 1103141 machine.go:91] provisioned docker machine in 997.853301ms
	I0717 19:59:02.302441 1103141 start.go:300] post-start starting for "embed-certs-114855" (driver="kvm2")
	I0717 19:59:02.302455 1103141 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:59:02.302487 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 19:59:02.302859 1103141 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:59:02.302900 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 19:59:02.305978 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:02.306544 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:02.306626 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:02.306769 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 19:59:02.306996 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:02.307231 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 19:59:02.307403 1103141 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/embed-certs-114855/id_rsa Username:docker}
	I0717 19:59:02.408835 1103141 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:59:02.415119 1103141 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 19:59:02.415157 1103141 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/addons for local assets ...
	I0717 19:59:02.415256 1103141 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/files for local assets ...
	I0717 19:59:02.415444 1103141 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> 10689542.pem in /etc/ssl/certs
	I0717 19:59:02.415570 1103141 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:59:02.430800 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:59:02.465311 1103141 start.go:303] post-start completed in 162.851156ms
	I0717 19:59:02.465347 1103141 fix.go:56] fixHost completed within 24.594172049s
	I0717 19:59:02.465375 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 19:59:02.468945 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:02.469406 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:02.469443 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:02.469704 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 19:59:02.469972 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:02.470166 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:02.470301 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 19:59:02.470501 1103141 main.go:141] libmachine: Using SSH client type: native
	I0717 19:59:02.471120 1103141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0717 19:59:02.471159 1103141 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:59:02.602921 1103141 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689623942.546317761
	
	I0717 19:59:02.602957 1103141 fix.go:206] guest clock: 1689623942.546317761
	I0717 19:59:02.602970 1103141 fix.go:219] Guest: 2023-07-17 19:59:02.546317761 +0000 UTC Remote: 2023-07-17 19:59:02.465351333 +0000 UTC m=+106.772168927 (delta=80.966428ms)
	I0717 19:59:02.603036 1103141 fix.go:190] guest clock delta is within tolerance: 80.966428ms
	I0717 19:59:02.603053 1103141 start.go:83] releasing machines lock for "embed-certs-114855", held for 24.731922082s
	I0717 19:59:02.604022 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 19:59:02.604447 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetIP
	I0717 19:59:02.608397 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:02.608991 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:02.609030 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:02.609308 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 19:59:02.610102 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 19:59:02.610386 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 19:59:02.610634 1103141 ssh_runner.go:195] Run: cat /version.json
	I0717 19:59:02.610677 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 19:59:02.611009 1103141 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:59:02.611106 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 19:59:02.614739 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:02.615121 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:02.615253 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:02.616278 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:02.616386 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 19:59:02.616802 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:02.616829 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:02.617030 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 19:59:02.617096 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:02.617395 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:02.617442 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 19:59:02.617597 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 19:59:02.617826 1103141 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/embed-certs-114855/id_rsa Username:docker}
	I0717 19:59:02.618522 1103141 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/embed-certs-114855/id_rsa Username:docker}
	W0717 19:59:02.745192 1103141 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 19:59:02.745275 1103141 ssh_runner.go:195] Run: systemctl --version
	I0717 19:59:02.752196 1103141 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:59:02.903288 1103141 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:59:02.911818 1103141 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:59:02.911917 1103141 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:59:02.933786 1103141 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:59:02.933883 1103141 start.go:469] detecting cgroup driver to use...
	I0717 19:59:02.934004 1103141 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:59:02.955263 1103141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:59:02.974997 1103141 docker.go:196] disabling cri-docker service (if available) ...
	I0717 19:59:02.975077 1103141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:59:02.994203 1103141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:59:03.014446 1103141 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:59:03.198307 1103141 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:59:03.397392 1103141 docker.go:212] disabling docker service ...
	I0717 19:59:03.397591 1103141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:59:03.418509 1103141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:59:03.437373 1103141 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:59:03.613508 1103141 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:59:03.739647 1103141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:59:03.754406 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:59:03.777929 1103141 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:59:03.778091 1103141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:59:03.790606 1103141 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:59:03.790721 1103141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:59:03.804187 1103141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:59:03.817347 1103141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:59:03.828813 1103141 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:59:03.840430 1103141 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:59:03.850240 1103141 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:59:03.850319 1103141 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:59:03.865894 1103141 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:59:03.882258 1103141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:59:04.017800 1103141 ssh_runner.go:195] Run: sudo systemctl restart crio
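
For reference, the CRI-O reconfiguration carried out over SSH in the lines above (pin the pause image, switch cgroup_manager to cgroupfs, force conmon_cgroup to "pod", then reload and restart the runtime) boils down to a short sequence of shell edits against /etc/crio/crio.conf.d/02-crio.conf. A minimal Go sketch of that sequence is shown below; it runs the same commands locally purely for illustration and is not minikube's ssh_runner code.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Sketch of the CRI-O setup steps visible in the log: the commands are the
	// ones minikube ran, but here they execute locally instead of over SSH.
	func main() {
		steps := []string{
			`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo systemctl daemon-reload`,
			`sudo systemctl restart crio`,
		}
		for _, s := range steps {
			fmt.Println("run:", s)
			if out, err := exec.Command("sh", "-c", s).CombinedOutput(); err != nil {
				fmt.Printf("step failed: %v\n%s\n", err, out)
				return
			}
		}
	}
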
	I0717 19:59:04.248761 1103141 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:59:04.248842 1103141 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:59:04.257893 1103141 start.go:537] Will wait 60s for crictl version
	I0717 19:59:04.257984 1103141 ssh_runner.go:195] Run: which crictl
	I0717 19:59:04.264221 1103141 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:59:04.305766 1103141 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 19:59:04.305851 1103141 ssh_runner.go:195] Run: crio --version
	I0717 19:59:04.375479 1103141 ssh_runner.go:195] Run: crio --version
	I0717 19:59:04.436461 1103141 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0717 19:59:04.438378 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetIP
	I0717 19:59:04.442194 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:04.442754 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:04.442792 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:04.443221 1103141 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 19:59:04.448534 1103141 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:59:04.465868 1103141 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 19:59:04.465946 1103141 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:59:04.502130 1103141 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0717 19:59:04.502219 1103141 ssh_runner.go:195] Run: which lz4
	I0717 19:59:04.507394 1103141 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 19:59:04.512404 1103141 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:59:04.512452 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0717 19:59:04.409929 1102415 api_server.go:253] Checking apiserver healthz at https://192.168.72.51:8444/healthz ...
	I0717 19:59:04.419102 1102415 api_server.go:279] https://192.168.72.51:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:59:04.419138 1102415 api_server.go:103] status: https://192.168.72.51:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:59:04.910761 1102415 api_server.go:253] Checking apiserver healthz at https://192.168.72.51:8444/healthz ...
	I0717 19:59:04.919844 1102415 api_server.go:279] https://192.168.72.51:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:59:04.919898 1102415 api_server.go:103] status: https://192.168.72.51:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:59:05.410298 1102415 api_server.go:253] Checking apiserver healthz at https://192.168.72.51:8444/healthz ...
	I0717 19:59:05.424961 1102415 api_server.go:279] https://192.168.72.51:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:59:05.425002 1102415 api_server.go:103] status: https://192.168.72.51:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:59:05.910377 1102415 api_server.go:253] Checking apiserver healthz at https://192.168.72.51:8444/healthz ...
	I0717 19:59:05.924698 1102415 api_server.go:279] https://192.168.72.51:8444/healthz returned 200:
	ok
	I0717 19:59:05.949272 1102415 api_server.go:141] control plane version: v1.27.3
	I0717 19:59:05.949308 1102415 api_server.go:131] duration metric: took 6.707896837s to wait for apiserver health ...
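
The healthz wait recorded above (repeated GETs to https://192.168.72.51:8444/healthz that return 403 and then 500 while the rbac/bootstrap-roles post-start hook finishes, until a 200 "ok" arrives) follows a simple polling pattern. Below is a hedged Go sketch of such a poller; the helper name, the 500ms cadence, and the insecure TLS setting are assumptions for illustration, not minikube's api_server.go implementation.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz probes the apiserver until /healthz returns 200 "ok" or the
	// deadline passes. Transient 403/500 answers, as in the log above, are
	// expected while bootstrap post-start hooks are still running.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // roughly the retry cadence seen in the log
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.72.51:8444/healthz", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
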
	I0717 19:59:05.949321 1102415 cni.go:84] Creating CNI manager for ""
	I0717 19:59:05.949334 1102415 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:59:05.952250 1102415 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:59:02.634580 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .Start
	I0717 19:59:02.635005 1101908 main.go:141] libmachine: (old-k8s-version-149000) Ensuring networks are active...
	I0717 19:59:02.635919 1101908 main.go:141] libmachine: (old-k8s-version-149000) Ensuring network default is active
	I0717 19:59:02.636328 1101908 main.go:141] libmachine: (old-k8s-version-149000) Ensuring network mk-old-k8s-version-149000 is active
	I0717 19:59:02.637168 1101908 main.go:141] libmachine: (old-k8s-version-149000) Getting domain xml...
	I0717 19:59:02.638177 1101908 main.go:141] libmachine: (old-k8s-version-149000) Creating domain...
	I0717 19:59:04.249328 1101908 main.go:141] libmachine: (old-k8s-version-149000) Waiting to get IP...
	I0717 19:59:04.250286 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:04.250925 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:04.251047 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:04.250909 1103733 retry.go:31] will retry after 305.194032ms: waiting for machine to come up
	I0717 19:59:04.558456 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:04.559354 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:04.559387 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:04.559290 1103733 retry.go:31] will retry after 338.882261ms: waiting for machine to come up
	I0717 19:59:04.900152 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:04.900673 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:04.900700 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:04.900616 1103733 retry.go:31] will retry after 334.664525ms: waiting for machine to come up
	I0717 19:59:05.236557 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:05.237252 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:05.237280 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:05.237121 1103733 retry.go:31] will retry after 410.314805ms: waiting for machine to come up
	I0717 19:59:05.648936 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:05.649630 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:05.649665 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:05.649572 1103733 retry.go:31] will retry after 482.724985ms: waiting for machine to come up
	I0717 19:59:06.135159 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:06.135923 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:06.135961 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:06.135851 1103733 retry.go:31] will retry after 646.078047ms: waiting for machine to come up
	I0717 19:59:06.783788 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:06.784327 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:06.784352 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:06.784239 1103733 retry.go:31] will retry after 1.176519187s: waiting for machine to come up
	I0717 19:59:05.954319 1102415 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:59:06.005185 1102415 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
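For reference, the 1-k8s.conflist written above is the bridge CNI configuration generated by the "Configuring bridge CNI" step. The exact 457-byte payload is not echoed in the log, but a bridge conflist of this general shape (values illustrative; the pod subnet matches the 10.244.0.0/16 CIDR used later in this run) would look like:

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "addIf": "true",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": {"portMappings": true}
	    }
	  ]
	}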
	I0717 19:59:06.070856 1102415 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:59:06.086358 1102415 system_pods.go:59] 8 kube-system pods found
	I0717 19:59:06.086429 1102415 system_pods.go:61] "coredns-5d78c9869d-rjqsv" [f27e2de9-9849-40e2-b6dc-1ee27537b1e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 19:59:06.086448 1102415 system_pods.go:61] "etcd-default-k8s-diff-port-711413" [15f74e3f-61a1-4464-bbff-d336f6df4b6e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:59:06.086462 1102415 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-711413" [c164ab32-6a1d-4079-9b58-7da96eabc60e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:59:06.086481 1102415 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-711413" [7dc149c8-be03-4fd6-b945-c63e95a28470] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:59:06.086498 1102415 system_pods.go:61] "kube-proxy-9qfpg" [ecb84bb9-57a2-4a42-8104-b792d38479ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 19:59:06.086513 1102415 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-711413" [f2cb374f-674e-4a08-82e0-0932b732b485] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:59:06.086526 1102415 system_pods.go:61] "metrics-server-74d5c6b9c-hzcd7" [17e01399-9910-4f01-abe7-3eae271af1db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:59:06.086536 1102415 system_pods.go:61] "storage-provisioner" [43705714-97f1-4b06-8eeb-04d60c22112a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 19:59:06.086546 1102415 system_pods.go:74] duration metric: took 15.663084ms to wait for pod list to return data ...
	I0717 19:59:06.086556 1102415 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:59:06.113146 1102415 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 19:59:06.113186 1102415 node_conditions.go:123] node cpu capacity is 2
	I0717 19:59:06.113203 1102415 node_conditions.go:105] duration metric: took 26.64051ms to run NodePressure ...
	I0717 19:59:06.113228 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
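The "kubeadm init phase addon all" invocation above re-applies only the built-in addon phase (CoreDNS and kube-proxy) against the restarted cluster rather than re-running a full kubeadm init. The sub-phases can also be run individually; a sketch using the same binary and config paths as above:

	sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon coredns --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon kube-proxy --config /var/tmp/minikube/kubeadm.yaml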
	I0717 19:59:06.757768 1102415 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 19:59:06.770030 1102415 kubeadm.go:787] kubelet initialised
	I0717 19:59:06.770064 1102415 kubeadm.go:788] duration metric: took 12.262867ms waiting for restarted kubelet to initialise ...
	I0717 19:59:06.770077 1102415 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:59:06.782569 1102415 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-rjqsv" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:06.794688 1102415 pod_ready.go:97] node "default-k8s-diff-port-711413" hosting pod "coredns-5d78c9869d-rjqsv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:06.794714 1102415 pod_ready.go:81] duration metric: took 12.110858ms waiting for pod "coredns-5d78c9869d-rjqsv" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:06.794723 1102415 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-711413" hosting pod "coredns-5d78c9869d-rjqsv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:06.794732 1102415 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:06.812213 1102415 pod_ready.go:97] node "default-k8s-diff-port-711413" hosting pod "etcd-default-k8s-diff-port-711413" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:06.812265 1102415 pod_ready.go:81] duration metric: took 17.522572ms waiting for pod "etcd-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:06.812281 1102415 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-711413" hosting pod "etcd-default-k8s-diff-port-711413" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:06.812291 1102415 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:06.838241 1102415 pod_ready.go:97] node "default-k8s-diff-port-711413" hosting pod "kube-apiserver-default-k8s-diff-port-711413" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:06.838291 1102415 pod_ready.go:81] duration metric: took 25.986333ms waiting for pod "kube-apiserver-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:06.838306 1102415 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-711413" hosting pod "kube-apiserver-default-k8s-diff-port-711413" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:06.838318 1102415 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:06.869011 1102415 pod_ready.go:97] node "default-k8s-diff-port-711413" hosting pod "kube-controller-manager-default-k8s-diff-port-711413" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:06.869127 1102415 pod_ready.go:81] duration metric: took 30.791681ms waiting for pod "kube-controller-manager-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:06.869155 1102415 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-711413" hosting pod "kube-controller-manager-default-k8s-diff-port-711413" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:06.869192 1102415 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9qfpg" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:07.164422 1102415 pod_ready.go:97] node "default-k8s-diff-port-711413" hosting pod "kube-proxy-9qfpg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:07.164521 1102415 pod_ready.go:81] duration metric: took 295.308967ms waiting for pod "kube-proxy-9qfpg" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:07.164549 1102415 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-711413" hosting pod "kube-proxy-9qfpg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:07.164570 1102415 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:07.571331 1102415 pod_ready.go:97] node "default-k8s-diff-port-711413" hosting pod "kube-scheduler-default-k8s-diff-port-711413" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:07.571370 1102415 pod_ready.go:81] duration metric: took 406.779012ms waiting for pod "kube-scheduler-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:07.571383 1102415 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-711413" hosting pod "kube-scheduler-default-k8s-diff-port-711413" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:07.571393 1102415 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:07.967699 1102415 pod_ready.go:97] node "default-k8s-diff-port-711413" hosting pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:07.967740 1102415 pod_ready.go:81] duration metric: took 396.334567ms waiting for pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:07.967757 1102415 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-711413" hosting pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:07.967770 1102415 pod_ready.go:38] duration metric: took 1.197678353s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:59:07.967793 1102415 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 19:59:08.014470 1102415 ops.go:34] apiserver oom_adj: -16
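The -16 read from /proc/<pid>/oom_adj is on the legacy OOM-adjust scale (-17 to 15, with -17 disabling OOM killing); it roughly corresponds to the strongly negative oom_score_adj the kubelet assigns to critical static pods, making the API server one of the last processes the kernel OOM killer will pick. Both interfaces can be inspected directly (illustrative):

	pid=$(pgrep kube-apiserver)
	cat /proc/${pid}/oom_adj         # legacy scale, -17..15
	cat /proc/${pid}/oom_score_adj   # current scale, -1000..1000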
	I0717 19:59:08.014500 1102415 kubeadm.go:640] restartCluster took 22.633851106s
	I0717 19:59:08.014510 1102415 kubeadm.go:406] StartCluster complete in 22.683627985s
	I0717 19:59:08.014534 1102415 settings.go:142] acquiring lock: {Name:mk3323296c186763db6ebb5a1c5ae94a6b1a6242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:59:08.014622 1102415 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 19:59:08.017393 1102415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/kubeconfig: {Name:mk7f4fe48169c87be980c7edd9dbe55d4ea8b9ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:59:08.018018 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 19:59:08.018126 1102415 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 19:59:08.018273 1102415 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-711413"
	I0717 19:59:08.018300 1102415 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-711413"
	W0717 19:59:08.018309 1102415 addons.go:240] addon storage-provisioner should already be in state true
	I0717 19:59:08.018404 1102415 host.go:66] Checking if "default-k8s-diff-port-711413" exists ...
	I0717 19:59:08.018400 1102415 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-711413"
	I0717 19:59:08.018457 1102415 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-711413"
	W0717 19:59:08.018471 1102415 addons.go:240] addon metrics-server should already be in state true
	I0717 19:59:08.018538 1102415 host.go:66] Checking if "default-k8s-diff-port-711413" exists ...
	I0717 19:59:08.018864 1102415 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:08.018916 1102415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:08.018950 1102415 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:08.018997 1102415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:08.019087 1102415 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-711413"
	I0717 19:59:08.019108 1102415 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-711413"
	I0717 19:59:08.019378 1102415 config.go:182] Loaded profile config "default-k8s-diff-port-711413": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:59:08.019724 1102415 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:08.019823 1102415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:08.028311 1102415 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-711413" context rescaled to 1 replicas
	I0717 19:59:08.028363 1102415 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.51 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:59:08.031275 1102415 out.go:177] * Verifying Kubernetes components...
	I0717 19:59:08.033186 1102415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:59:08.041793 1102415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43183
	I0717 19:59:08.041831 1102415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36281
	I0717 19:59:08.042056 1102415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43569
	I0717 19:59:08.042525 1102415 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:08.042709 1102415 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:08.043195 1102415 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:08.043373 1102415 main.go:141] libmachine: Using API Version  1
	I0717 19:59:08.043382 1102415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:08.043479 1102415 main.go:141] libmachine: Using API Version  1
	I0717 19:59:08.043486 1102415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:08.043911 1102415 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:08.044078 1102415 main.go:141] libmachine: Using API Version  1
	I0717 19:59:08.044095 1102415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:08.044514 1102415 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:08.044542 1102415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:08.044773 1102415 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:08.044878 1102415 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:08.045003 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetState
	I0717 19:59:08.045373 1102415 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:08.045401 1102415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:08.065715 1102415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45741
	I0717 19:59:08.066371 1102415 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:08.067102 1102415 main.go:141] libmachine: Using API Version  1
	I0717 19:59:08.067128 1102415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:08.067662 1102415 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:08.067824 1102415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37537
	I0717 19:59:08.068091 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetState
	I0717 19:59:08.069488 1102415 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:08.070144 1102415 main.go:141] libmachine: Using API Version  1
	I0717 19:59:08.070163 1102415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:08.070232 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	I0717 19:59:08.070672 1102415 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:08.070852 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetState
	I0717 19:59:08.072648 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	I0717 19:59:08.075752 1102415 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 19:59:08.077844 1102415 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:59:04.355036 1102136 addons.go:502] enable addons completed in 3.125961318s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0717 19:59:06.268158 1102136 node_ready.go:58] node "no-preload-408472" has status "Ready":"False"
	I0717 19:59:08.079803 1102415 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:59:08.079826 1102415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 19:59:08.079857 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:59:08.077802 1102415 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 19:59:08.079941 1102415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 19:59:08.079958 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:59:08.078604 1102415 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-711413"
	W0717 19:59:08.080010 1102415 addons.go:240] addon default-storageclass should already be in state true
	I0717 19:59:08.080048 1102415 host.go:66] Checking if "default-k8s-diff-port-711413" exists ...
	I0717 19:59:08.080446 1102415 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:08.080498 1102415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:08.084746 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:59:08.084836 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:59:08.085468 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:59:08.085502 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:59:08.085512 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:59:08.085534 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:59:08.085599 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:59:08.085738 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:59:08.085851 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:59:08.085998 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:59:08.086028 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:59:08.086182 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:59:08.086221 1102415 sshutil.go:53] new ssh client: &{IP:192.168.72.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/default-k8s-diff-port-711413/id_rsa Username:docker}
	I0717 19:59:08.086298 1102415 sshutil.go:53] new ssh client: &{IP:192.168.72.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/default-k8s-diff-port-711413/id_rsa Username:docker}
	I0717 19:59:08.103113 1102415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41455
	I0717 19:59:08.103751 1102415 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:08.104389 1102415 main.go:141] libmachine: Using API Version  1
	I0717 19:59:08.104412 1102415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:08.104985 1102415 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:08.105805 1102415 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:08.105846 1102415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:08.127906 1102415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34411
	I0717 19:59:08.129757 1102415 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:08.130713 1102415 main.go:141] libmachine: Using API Version  1
	I0717 19:59:08.130734 1102415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:08.131175 1102415 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:08.133060 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetState
	I0717 19:59:08.135496 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	I0717 19:59:08.135824 1102415 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 19:59:08.135840 1102415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 19:59:08.135860 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:59:08.139031 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:59:08.139443 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:59:08.139480 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:59:08.139855 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:59:08.140455 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:59:08.140850 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:59:08.141145 1102415 sshutil.go:53] new ssh client: &{IP:192.168.72.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/default-k8s-diff-port-711413/id_rsa Username:docker}
	I0717 19:59:08.260742 1102415 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 19:59:08.260779 1102415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 19:59:08.310084 1102415 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 19:59:08.310123 1102415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 19:59:08.315228 1102415 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:59:08.333112 1102415 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 19:59:08.347265 1102415 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:59:08.347297 1102415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 19:59:08.446018 1102415 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:59:08.602418 1102415 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0717 19:59:08.602481 1102415 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-711413" to be "Ready" ...
	I0717 19:59:06.789410 1103141 crio.go:444] Took 2.282067 seconds to copy over tarball
	I0717 19:59:06.789500 1103141 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 19:59:10.614919 1103141 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.825382729s)
	I0717 19:59:10.614956 1103141 crio.go:451] Took 3.825512 seconds to extract the tarball
	I0717 19:59:10.614970 1103141 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 19:59:10.668773 1103141 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:59:10.721815 1103141 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 19:59:10.721849 1103141 cache_images.go:84] Images are preloaded, skipping loading
	I0717 19:59:10.721928 1103141 ssh_runner.go:195] Run: crio config
	I0717 19:59:10.626470 1102415 node_ready.go:58] node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:11.522603 1102415 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.189445026s)
	I0717 19:59:11.522668 1102415 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:11.522681 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .Close
	I0717 19:59:11.522703 1102415 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.207433491s)
	I0717 19:59:11.522747 1102415 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:11.522762 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .Close
	I0717 19:59:11.523183 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | Closing plugin on server side
	I0717 19:59:11.523208 1102415 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:11.523223 1102415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:11.523234 1102415 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:11.523247 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .Close
	I0717 19:59:11.523700 1102415 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:11.523717 1102415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:11.523768 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | Closing plugin on server side
	I0717 19:59:11.525232 1102415 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:11.525259 1102415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:11.525269 1102415 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:11.525278 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .Close
	I0717 19:59:11.526823 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | Closing plugin on server side
	I0717 19:59:11.526841 1102415 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:11.526864 1102415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:11.526878 1102415 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:11.526889 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .Close
	I0717 19:59:11.527158 1102415 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:11.527174 1102415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:11.527190 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | Closing plugin on server side
	I0717 19:59:11.546758 1102415 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.100689574s)
	I0717 19:59:11.546840 1102415 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:11.546856 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .Close
	I0717 19:59:11.548817 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | Closing plugin on server side
	I0717 19:59:11.548900 1102415 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:11.548920 1102415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:11.548946 1102415 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:11.548966 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .Close
	I0717 19:59:11.549341 1102415 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:11.549360 1102415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:11.549374 1102415 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-711413"
	I0717 19:59:11.549385 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | Closing plugin on server side
	I0717 19:59:11.629748 1102415 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 19:59:07.962879 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:07.963461 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:07.963494 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:07.963408 1103733 retry.go:31] will retry after 1.458776494s: waiting for machine to come up
	I0717 19:59:09.423815 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:09.424545 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:09.424578 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:09.424434 1103733 retry.go:31] will retry after 1.505416741s: waiting for machine to come up
	I0717 19:59:10.932450 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:10.932970 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:10.932999 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:10.932907 1103733 retry.go:31] will retry after 2.119238731s: waiting for machine to come up
	I0717 19:59:08.762965 1102136 node_ready.go:49] node "no-preload-408472" has status "Ready":"True"
	I0717 19:59:08.762999 1102136 node_ready.go:38] duration metric: took 7.016711148s waiting for node "no-preload-408472" to be "Ready" ...
	I0717 19:59:08.763010 1102136 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:59:08.770929 1102136 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-9mxdj" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:08.781876 1102136 pod_ready.go:92] pod "coredns-5d78c9869d-9mxdj" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:08.781916 1102136 pod_ready.go:81] duration metric: took 10.948677ms waiting for pod "coredns-5d78c9869d-9mxdj" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:08.781931 1102136 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:08.790806 1102136 pod_ready.go:92] pod "etcd-no-preload-408472" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:08.790842 1102136 pod_ready.go:81] duration metric: took 8.902354ms waiting for pod "etcd-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:08.790858 1102136 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:11.107348 1102136 pod_ready.go:102] pod "kube-apiserver-no-preload-408472" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:12.306923 1102136 pod_ready.go:92] pod "kube-apiserver-no-preload-408472" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:12.306956 1102136 pod_ready.go:81] duration metric: took 3.516087536s waiting for pod "kube-apiserver-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:12.306971 1102136 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:12.314504 1102136 pod_ready.go:92] pod "kube-controller-manager-no-preload-408472" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:12.314541 1102136 pod_ready.go:81] duration metric: took 7.560269ms waiting for pod "kube-controller-manager-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:12.314557 1102136 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cntdn" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:12.323200 1102136 pod_ready.go:92] pod "kube-proxy-cntdn" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:12.323232 1102136 pod_ready.go:81] duration metric: took 8.667115ms waiting for pod "kube-proxy-cntdn" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:12.323246 1102136 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:12.367453 1102136 pod_ready.go:92] pod "kube-scheduler-no-preload-408472" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:12.367483 1102136 pod_ready.go:81] duration metric: took 44.229894ms waiting for pod "kube-scheduler-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:12.367494 1102136 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:11.776332 1102415 addons.go:502] enable addons completed in 3.758222459s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 19:59:13.118285 1102415 node_ready.go:58] node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:10.806964 1103141 cni.go:84] Creating CNI manager for ""
	I0717 19:59:10.907820 1103141 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:59:10.908604 1103141 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 19:59:10.908671 1103141 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.213 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-114855 NodeName:embed-certs-114855 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:59:10.909456 1103141 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.213
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-114855"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:59:10.909661 1103141 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-114855 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:embed-certs-114855 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
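In the kubelet drop-in above, the empty ExecStart= line is the standard systemd idiom for clearing the unit's existing ExecStart before the drop-in substitutes its own command line. Once such a drop-in is installed and systemd reloaded, the effective command can be confirmed with:

	sudo systemctl daemon-reload
	systemctl cat kubelet.service            # unit file plus all drop-ins
	systemctl show kubelet -p ExecStart      # the ExecStart that will actually run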
	I0717 19:59:10.909757 1103141 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 19:59:10.933995 1103141 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:59:10.934116 1103141 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:59:10.949424 1103141 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0717 19:59:10.971981 1103141 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:59:10.995942 1103141 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0717 19:59:11.021147 1103141 ssh_runner.go:195] Run: grep 192.168.39.213	control-plane.minikube.internal$ /etc/hosts
	I0717 19:59:11.027824 1103141 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.213	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:59:11.046452 1103141 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855 for IP: 192.168.39.213
	I0717 19:59:11.046507 1103141 certs.go:190] acquiring lock for shared ca certs: {Name:mkdfbc203d7bd874aff2f1c4e5c3352251804e32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:59:11.046722 1103141 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key
	I0717 19:59:11.046792 1103141 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key
	I0717 19:59:11.046890 1103141 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855/client.key
	I0717 19:59:11.046974 1103141 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855/apiserver.key.af9d86f2
	I0717 19:59:11.047032 1103141 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855/proxy-client.key
	I0717 19:59:11.047198 1103141 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem (1338 bytes)
	W0717 19:59:11.047246 1103141 certs.go:433] ignoring /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954_empty.pem, impossibly tiny 0 bytes
	I0717 19:59:11.047262 1103141 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:59:11.047297 1103141 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem (1082 bytes)
	I0717 19:59:11.047330 1103141 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:59:11.047360 1103141 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem (1675 bytes)
	I0717 19:59:11.047422 1103141 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:59:11.048308 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 19:59:11.076826 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 19:59:11.116981 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:59:11.152433 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 19:59:11.186124 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:59:11.219052 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 19:59:11.251034 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:59:11.281026 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 19:59:11.314219 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:59:11.341636 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem --> /usr/share/ca-certificates/1068954.pem (1338 bytes)
	I0717 19:59:11.372920 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /usr/share/ca-certificates/10689542.pem (1708 bytes)
	I0717 19:59:11.403343 1103141 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:59:11.428094 1103141 ssh_runner.go:195] Run: openssl version
	I0717 19:59:11.435909 1103141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:59:11.455770 1103141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:59:11.463749 1103141 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:59:11.463851 1103141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:59:11.473784 1103141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:59:11.490867 1103141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1068954.pem && ln -fs /usr/share/ca-certificates/1068954.pem /etc/ssl/certs/1068954.pem"
	I0717 19:59:11.507494 1103141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1068954.pem
	I0717 19:59:11.514644 1103141 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:52 /usr/share/ca-certificates/1068954.pem
	I0717 19:59:11.514746 1103141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1068954.pem
	I0717 19:59:11.523975 1103141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1068954.pem /etc/ssl/certs/51391683.0"
	I0717 19:59:11.539528 1103141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10689542.pem && ln -fs /usr/share/ca-certificates/10689542.pem /etc/ssl/certs/10689542.pem"
	I0717 19:59:11.552649 1103141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10689542.pem
	I0717 19:59:11.559671 1103141 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:52 /usr/share/ca-certificates/10689542.pem
	I0717 19:59:11.559757 1103141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10689542.pem
	I0717 19:59:11.569190 1103141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10689542.pem /etc/ssl/certs/3ec20f2e.0"
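The three blocks above install the minikube CA and the test certificates into the system trust store: each PEM is copied to /usr/share/ca-certificates, linked into /etc/ssl/certs under its own name, and then also linked under its OpenSSL subject hash plus a ".0" suffix, which is how OpenSSL looks up CA certificates by hash. The hash in the symlink name comes straight from the x509 -hash call, e.g. for the minikube CA in this run:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 for this CA
	ls -l /etc/ssl/certs/b5213941.0                                           # -> /etc/ssl/certs/minikubeCA.pem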
	I0717 19:59:11.584473 1103141 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 19:59:11.590453 1103141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:59:11.599427 1103141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:59:11.607503 1103141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:59:11.619641 1103141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:59:11.627914 1103141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:59:11.636600 1103141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
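Each of the openssl x509 -checkend 86400 calls above exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now, so a non-zero status flags a certificate that needs regeneration before the cluster restart proceeds. A minimal sketch of the same check:

	if ! sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400; then
	  echo "certificate expires within 24h - regenerate before restarting"
	fi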
	I0717 19:59:11.645829 1103141 kubeadm.go:404] StartCluster: {Name:embed-certs-114855 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-114855 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:59:11.645960 1103141 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:59:11.646049 1103141 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:59:11.704959 1103141 cri.go:89] found id: ""
	I0717 19:59:11.705078 1103141 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:59:11.720588 1103141 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 19:59:11.720621 1103141 kubeadm.go:636] restartCluster start
	I0717 19:59:11.720697 1103141 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:59:11.734693 1103141 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:11.736236 1103141 kubeconfig.go:92] found "embed-certs-114855" server: "https://192.168.39.213:8443"
	I0717 19:59:11.739060 1103141 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:59:11.752975 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:11.753096 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:11.766287 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:12.266751 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:12.266867 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:12.281077 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:12.766565 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:12.766669 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:12.780460 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:13.267185 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:13.267305 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:13.286250 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:13.766474 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:13.766582 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:13.780973 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:14.266474 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:14.266565 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:14.283412 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:14.766783 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:14.766885 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:14.782291 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:15.266607 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:15.266721 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:15.279993 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:13.054320 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:13.054787 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:13.054821 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:13.054724 1103733 retry.go:31] will retry after 2.539531721s: waiting for machine to come up
	I0717 19:59:15.597641 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:15.598199 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:15.598235 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:15.598132 1103733 retry.go:31] will retry after 3.376944775s: waiting for machine to come up
	I0717 19:59:14.773506 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:16.778529 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:14.611538 1102415 node_ready.go:49] node "default-k8s-diff-port-711413" has status "Ready":"True"
	I0717 19:59:14.611573 1102415 node_ready.go:38] duration metric: took 6.009046151s waiting for node "default-k8s-diff-port-711413" to be "Ready" ...
	I0717 19:59:14.611583 1102415 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:59:14.620522 1102415 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-rjqsv" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:14.629345 1102415 pod_ready.go:92] pod "coredns-5d78c9869d-rjqsv" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:14.629380 1102415 pod_ready.go:81] duration metric: took 8.831579ms waiting for pod "coredns-5d78c9869d-rjqsv" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:14.629394 1102415 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:14.636756 1102415 pod_ready.go:92] pod "etcd-default-k8s-diff-port-711413" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:14.636781 1102415 pod_ready.go:81] duration metric: took 7.379506ms waiting for pod "etcd-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:14.636791 1102415 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:16.658668 1102415 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-711413" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:16.658699 1102415 pod_ready.go:81] duration metric: took 2.021899463s waiting for pod "kube-apiserver-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:16.658715 1102415 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:16.667666 1102415 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-711413" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:16.667695 1102415 pod_ready.go:81] duration metric: took 8.971091ms waiting for pod "kube-controller-manager-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:16.667709 1102415 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9qfpg" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:16.677402 1102415 pod_ready.go:92] pod "kube-proxy-9qfpg" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:16.677433 1102415 pod_ready.go:81] duration metric: took 9.71529ms waiting for pod "kube-proxy-9qfpg" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:16.677448 1102415 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:17.011304 1102415 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-711413" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:17.011332 1102415 pod_ready.go:81] duration metric: took 333.876392ms waiting for pod "kube-scheduler-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:17.011344 1102415 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:15.766793 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:15.766913 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:15.780587 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:16.266363 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:16.266491 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:16.281228 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:16.766575 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:16.766690 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:16.782127 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:17.266511 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:17.266610 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:17.282119 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:17.766652 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:17.766758 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:17.783972 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:18.266759 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:18.266855 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:18.284378 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:18.766574 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:18.766675 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:18.782934 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:19.266475 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:19.266577 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:19.280895 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:19.767307 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:19.767411 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:19.781007 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:20.266522 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:20.266646 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:20.280722 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:18.976814 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:18.977300 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:18.977326 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:18.977254 1103733 retry.go:31] will retry after 2.728703676s: waiting for machine to come up
	I0717 19:59:21.709422 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:21.709889 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:21.709916 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:21.709841 1103733 retry.go:31] will retry after 5.373130791s: waiting for machine to come up
	I0717 19:59:19.273610 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:21.274431 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:19.419889 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:21.422395 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:23.423974 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:20.767398 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:20.767505 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:20.780641 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:21.266963 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:21.267053 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:21.280185 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:21.753855 1103141 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0717 19:59:21.753890 1103141 kubeadm.go:1128] stopping kube-system containers ...
	I0717 19:59:21.753905 1103141 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:59:21.753969 1103141 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:59:21.792189 1103141 cri.go:89] found id: ""
	I0717 19:59:21.792276 1103141 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:59:21.809670 1103141 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:59:21.820341 1103141 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:59:21.820408 1103141 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:59:21.830164 1103141 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 19:59:21.830194 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:21.961988 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:22.788248 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:23.013910 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:23.110334 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:23.204343 1103141 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:59:23.204448 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:59:23.721708 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:59:24.222046 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:59:24.721482 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:59:25.221523 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:59:25.721720 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:59:23.773347 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:26.275805 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:25.424115 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:27.920288 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:27.084831 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.085274 1101908 main.go:141] libmachine: (old-k8s-version-149000) Found IP for machine: 192.168.50.177
	I0717 19:59:27.085299 1101908 main.go:141] libmachine: (old-k8s-version-149000) Reserving static IP address...
	I0717 19:59:27.085332 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has current primary IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.085757 1101908 main.go:141] libmachine: (old-k8s-version-149000) Reserved static IP address: 192.168.50.177
	I0717 19:59:27.085799 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "old-k8s-version-149000", mac: "52:54:00:88:d8:03", ip: "192.168.50.177"} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:27.085821 1101908 main.go:141] libmachine: (old-k8s-version-149000) Waiting for SSH to be available...
	I0717 19:59:27.085855 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | skip adding static IP to network mk-old-k8s-version-149000 - found existing host DHCP lease matching {name: "old-k8s-version-149000", mac: "52:54:00:88:d8:03", ip: "192.168.50.177"}
	I0717 19:59:27.085880 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | Getting to WaitForSSH function...
	I0717 19:59:27.088245 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.088569 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:27.088605 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.088777 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | Using SSH client type: external
	I0717 19:59:27.088809 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | Using SSH private key: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/old-k8s-version-149000/id_rsa (-rw-------)
	I0717 19:59:27.088850 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.177 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/old-k8s-version-149000/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:59:27.088866 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | About to run SSH command:
	I0717 19:59:27.088877 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | exit 0
	I0717 19:59:27.186039 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | SSH cmd err, output: <nil>: 
	I0717 19:59:27.186549 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetConfigRaw
	I0717 19:59:27.187427 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetIP
	I0717 19:59:27.190317 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.190738 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:27.190781 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.191089 1101908 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/config.json ...
	I0717 19:59:27.191343 1101908 machine.go:88] provisioning docker machine ...
	I0717 19:59:27.191369 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	I0717 19:59:27.191637 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetMachineName
	I0717 19:59:27.191875 1101908 buildroot.go:166] provisioning hostname "old-k8s-version-149000"
	I0717 19:59:27.191902 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetMachineName
	I0717 19:59:27.192058 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 19:59:27.194710 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.195141 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:27.195190 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.195472 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 19:59:27.195752 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:27.195938 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:27.196104 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 19:59:27.196308 1101908 main.go:141] libmachine: Using SSH client type: native
	I0717 19:59:27.196731 1101908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.177 22 <nil> <nil>}
	I0717 19:59:27.196746 1101908 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-149000 && echo "old-k8s-version-149000" | sudo tee /etc/hostname
	I0717 19:59:27.338648 1101908 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-149000
	
	I0717 19:59:27.338712 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 19:59:27.341719 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.342138 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:27.342176 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.342392 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 19:59:27.342666 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:27.342879 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:27.343036 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 19:59:27.343216 1101908 main.go:141] libmachine: Using SSH client type: native
	I0717 19:59:27.343733 1101908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.177 22 <nil> <nil>}
	I0717 19:59:27.343763 1101908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-149000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-149000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-149000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:59:27.478006 1101908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:59:27.478054 1101908 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16890-1061725/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-1061725/.minikube}
	I0717 19:59:27.478109 1101908 buildroot.go:174] setting up certificates
	I0717 19:59:27.478130 1101908 provision.go:83] configureAuth start
	I0717 19:59:27.478150 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetMachineName
	I0717 19:59:27.478485 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetIP
	I0717 19:59:27.481425 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.481865 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:27.481900 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.482029 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 19:59:27.484825 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.485290 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:27.485326 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.485505 1101908 provision.go:138] copyHostCerts
	I0717 19:59:27.485604 1101908 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem, removing ...
	I0717 19:59:27.485633 1101908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem
	I0717 19:59:27.485709 1101908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem (1082 bytes)
	I0717 19:59:27.485837 1101908 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem, removing ...
	I0717 19:59:27.485849 1101908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem
	I0717 19:59:27.485879 1101908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem (1123 bytes)
	I0717 19:59:27.485957 1101908 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem, removing ...
	I0717 19:59:27.485970 1101908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem
	I0717 19:59:27.485997 1101908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem (1675 bytes)
	I0717 19:59:27.486131 1101908 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-149000 san=[192.168.50.177 192.168.50.177 localhost 127.0.0.1 minikube old-k8s-version-149000]
	I0717 19:59:27.667436 1101908 provision.go:172] copyRemoteCerts
	I0717 19:59:27.667514 1101908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:59:27.667551 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 19:59:27.670875 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.671304 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:27.671340 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.671600 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 19:59:27.671851 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:27.672053 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 19:59:27.672222 1101908 sshutil.go:53] new ssh client: &{IP:192.168.50.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/old-k8s-version-149000/id_rsa Username:docker}
	I0717 19:59:27.764116 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 19:59:27.795726 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 19:59:27.827532 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:59:27.859734 1101908 provision.go:86] duration metric: configureAuth took 381.584228ms
	I0717 19:59:27.859769 1101908 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:59:27.860014 1101908 config.go:182] Loaded profile config "old-k8s-version-149000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0717 19:59:27.860125 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 19:59:27.863330 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.863915 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:27.863969 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.864318 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 19:59:27.864559 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:27.864735 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:27.864931 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 19:59:27.865114 1101908 main.go:141] libmachine: Using SSH client type: native
	I0717 19:59:27.865768 1101908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.177 22 <nil> <nil>}
	I0717 19:59:27.865791 1101908 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:59:28.221755 1101908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:59:28.221788 1101908 machine.go:91] provisioned docker machine in 1.030429206s
	I0717 19:59:28.221802 1101908 start.go:300] post-start starting for "old-k8s-version-149000" (driver="kvm2")
	I0717 19:59:28.221817 1101908 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:59:28.221868 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	I0717 19:59:28.222236 1101908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:59:28.222265 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 19:59:28.225578 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:28.226092 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:28.226130 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:28.226268 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 19:59:28.226511 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:28.226695 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 19:59:28.226875 1101908 sshutil.go:53] new ssh client: &{IP:192.168.50.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/old-k8s-version-149000/id_rsa Username:docker}
	I0717 19:59:28.321338 1101908 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:59:28.326703 1101908 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 19:59:28.326747 1101908 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/addons for local assets ...
	I0717 19:59:28.326843 1101908 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/files for local assets ...
	I0717 19:59:28.326969 1101908 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> 10689542.pem in /etc/ssl/certs
	I0717 19:59:28.327239 1101908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:59:28.337536 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:59:28.366439 1101908 start.go:303] post-start completed in 144.619105ms
	I0717 19:59:28.366476 1101908 fix.go:56] fixHost completed within 25.763256574s
	I0717 19:59:28.366510 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 19:59:28.369661 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:28.370194 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:28.370249 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:28.370470 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 19:59:28.370758 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:28.370956 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:28.371192 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 19:59:28.371476 1101908 main.go:141] libmachine: Using SSH client type: native
	I0717 19:59:28.371943 1101908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.177 22 <nil> <nil>}
	I0717 19:59:28.371970 1101908 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:59:28.498983 1101908 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689623968.431200547
	
	I0717 19:59:28.499015 1101908 fix.go:206] guest clock: 1689623968.431200547
	I0717 19:59:28.499025 1101908 fix.go:219] Guest: 2023-07-17 19:59:28.431200547 +0000 UTC Remote: 2023-07-17 19:59:28.366482535 +0000 UTC m=+386.593094928 (delta=64.718012ms)
	I0717 19:59:28.499083 1101908 fix.go:190] guest clock delta is within tolerance: 64.718012ms
	I0717 19:59:28.499090 1101908 start.go:83] releasing machines lock for "old-k8s-version-149000", held for 25.895913429s
	I0717 19:59:28.499122 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	I0717 19:59:28.499449 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetIP
	I0717 19:59:28.502760 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:28.503338 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:28.503395 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:28.503746 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	I0717 19:59:28.504549 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	I0717 19:59:28.504804 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	I0717 19:59:28.504907 1101908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:59:28.504995 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 19:59:28.505142 1101908 ssh_runner.go:195] Run: cat /version.json
	I0717 19:59:28.505175 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 19:59:28.508832 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:28.508868 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:28.509347 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:28.509384 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:28.509412 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:28.509431 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:28.509539 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 19:59:28.509827 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 19:59:28.509888 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:28.510074 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 19:59:28.510126 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:28.510292 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 19:59:28.510284 1101908 sshutil.go:53] new ssh client: &{IP:192.168.50.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/old-k8s-version-149000/id_rsa Username:docker}
	I0717 19:59:28.510442 1101908 sshutil.go:53] new ssh client: &{IP:192.168.50.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/old-k8s-version-149000/id_rsa Username:docker}
	W0717 19:59:28.604171 1101908 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 19:59:28.604283 1101908 ssh_runner.go:195] Run: systemctl --version
	I0717 19:59:28.637495 1101908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:59:28.790306 1101908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:59:28.797261 1101908 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:59:28.797343 1101908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:59:28.822016 1101908 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:59:28.822056 1101908 start.go:469] detecting cgroup driver to use...
	I0717 19:59:28.822144 1101908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:59:28.843785 1101908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:59:28.863178 1101908 docker.go:196] disabling cri-docker service (if available) ...
	I0717 19:59:28.863248 1101908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:59:28.880265 1101908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:59:28.897122 1101908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:59:29.019759 1101908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:59:29.166490 1101908 docker.go:212] disabling docker service ...
	I0717 19:59:29.166561 1101908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:59:29.188125 1101908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:59:29.205693 1101908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:59:29.336805 1101908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:59:29.478585 1101908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:59:29.494755 1101908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:59:29.516478 1101908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0717 19:59:29.516633 1101908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:59:29.527902 1101908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:59:29.528000 1101908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:59:29.539443 1101908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:59:29.551490 1101908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:59:29.563407 1101908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:59:29.577575 1101908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:59:29.587749 1101908 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:59:29.587839 1101908 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:59:29.602120 1101908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:59:29.613647 1101908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:59:29.730721 1101908 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:59:29.907780 1101908 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:59:29.907916 1101908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:59:29.913777 1101908 start.go:537] Will wait 60s for crictl version
	I0717 19:59:29.913855 1101908 ssh_runner.go:195] Run: which crictl
	I0717 19:59:29.921083 1101908 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:59:29.955985 1101908 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 19:59:29.956099 1101908 ssh_runner.go:195] Run: crio --version
	I0717 19:59:30.011733 1101908 ssh_runner.go:195] Run: crio --version
	I0717 19:59:30.068591 1101908 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0717 19:59:25.744228 1103141 api_server.go:72] duration metric: took 2.539876638s to wait for apiserver process to appear ...
	I0717 19:59:25.744263 1103141 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:59:25.744295 1103141 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0717 19:59:25.744850 1103141 api_server.go:269] stopped: https://192.168.39.213:8443/healthz: Get "https://192.168.39.213:8443/healthz": dial tcp 192.168.39.213:8443: connect: connection refused
	I0717 19:59:26.245930 1103141 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0717 19:59:29.163298 1103141 api_server.go:279] https://192.168.39.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:59:29.163345 1103141 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:59:29.163362 1103141 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0717 19:59:29.197738 1103141 api_server.go:279] https://192.168.39.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:59:29.197812 1103141 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:59:29.245946 1103141 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0717 19:59:29.261723 1103141 api_server.go:279] https://192.168.39.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:59:29.261777 1103141 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:59:29.745343 1103141 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0717 19:59:29.753999 1103141 api_server.go:279] https://192.168.39.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:59:29.754040 1103141 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:59:30.245170 1103141 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0717 19:59:30.253748 1103141 api_server.go:279] https://192.168.39.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:59:30.253809 1103141 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:59:30.745290 1103141 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0717 19:59:30.760666 1103141 api_server.go:279] https://192.168.39.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:59:30.760706 1103141 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:59:31.244952 1103141 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0717 19:59:31.262412 1103141 api_server.go:279] https://192.168.39.213:8443/healthz returned 200:
	ok
	I0717 19:59:31.284253 1103141 api_server.go:141] control plane version: v1.27.3
	I0717 19:59:31.284290 1103141 api_server.go:131] duration metric: took 5.540019245s to wait for apiserver health ...
	I0717 19:59:31.284303 1103141 cni.go:84] Creating CNI manager for ""
	I0717 19:59:31.284316 1103141 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:59:31.286828 1103141 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
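The api_server.go lines above show minikube polling /healthz roughly every 500ms until the rbac/bootstrap-roles and scheduling post-start hooks finish and the endpoint finally returns 200. A minimal Go sketch of that style of retry loop follows; it is an illustration, not minikube's actual code, and the URL, timeouts and the InsecureSkipVerify shortcut are assumptions made for the example.

// healthzpoll: hypothetical sketch of polling an apiserver /healthz endpoint
// until it returns 200 or a deadline passes, as the log above records.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The test cluster uses a self-signed CA, so verification is skipped
		// here for the sketch; a real client would load the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
	return fmt.Errorf("apiserver did not become healthy within %s", deadline)
}

func main() {
	if err := waitForHealthz("https://192.168.39.213:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}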
	I0717 19:59:30.070665 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetIP
	I0717 19:59:30.074049 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:30.074479 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:30.074503 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:30.074871 1101908 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0717 19:59:30.080177 1101908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:59:30.094479 1101908 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0717 19:59:30.094543 1101908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:59:30.130526 1101908 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0717 19:59:30.130599 1101908 ssh_runner.go:195] Run: which lz4
	I0717 19:59:30.135920 1101908 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 19:59:30.140678 1101908 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:59:30.140723 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0717 19:59:28.772996 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:30.785175 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:33.273857 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:30.427017 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:32.920586 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:31.288869 1103141 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:59:31.323116 1103141 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 19:59:31.368054 1103141 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:59:31.392061 1103141 system_pods.go:59] 8 kube-system pods found
	I0717 19:59:31.392110 1103141 system_pods.go:61] "coredns-5d78c9869d-rgdz8" [d1cc8cd3-70eb-4315-89d9-40d4ef97a5c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 19:59:31.392122 1103141 system_pods.go:61] "etcd-embed-certs-114855" [4c8e5fe0-e26e-4244-b284-5a42b4247614] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:59:31.392136 1103141 system_pods.go:61] "kube-apiserver-embed-certs-114855" [3cc43f5e-6c56-4587-a69a-ce58c12f500d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:59:31.392146 1103141 system_pods.go:61] "kube-controller-manager-embed-certs-114855" [cadca801-1feb-45f9-ac3c-eca697f1919f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:59:31.392157 1103141 system_pods.go:61] "kube-proxy-lkncr" [9ec4e4ac-81a5-4547-ab3e-6a3db21cc19d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 19:59:31.392166 1103141 system_pods.go:61] "kube-scheduler-embed-certs-114855" [0e9a0435-a1d5-42bc-a051-1587cd479ac6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:59:31.392184 1103141 system_pods.go:61] "metrics-server-74d5c6b9c-pshr5" [2d4e6b33-c325-4aa5-8458-b604be762cbe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:59:31.392192 1103141 system_pods.go:61] "storage-provisioner" [4f7b39f3-3fc5-4e41-9f58-aa1d938ce06f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 19:59:31.392199 1103141 system_pods.go:74] duration metric: took 24.119934ms to wait for pod list to return data ...
	I0717 19:59:31.392210 1103141 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:59:31.405136 1103141 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 19:59:31.405178 1103141 node_conditions.go:123] node cpu capacity is 2
	I0717 19:59:31.405192 1103141 node_conditions.go:105] duration metric: took 12.975462ms to run NodePressure ...
	I0717 19:59:31.405221 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:32.158757 1103141 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 19:59:32.167221 1103141 kubeadm.go:787] kubelet initialised
	I0717 19:59:32.167263 1103141 kubeadm.go:788] duration metric: took 8.462047ms waiting for restarted kubelet to initialise ...
	I0717 19:59:32.167277 1103141 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:59:32.178888 1103141 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-rgdz8" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:34.199125 1103141 pod_ready.go:102] pod "coredns-5d78c9869d-rgdz8" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:32.017439 1101908 crio.go:444] Took 1.881555 seconds to copy over tarball
	I0717 19:59:32.017535 1101908 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 19:59:35.573024 1101908 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.55545349s)
	I0717 19:59:35.573070 1101908 crio.go:451] Took 3.555594 seconds to extract the tarball
	I0717 19:59:35.573081 1101908 ssh_runner.go:146] rm: /preloaded.tar.lz4
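The preceding lines copy the preloaded image tarball to the node and unpack it with tar's lz4 decompressor before removing it. A hedged Go sketch of invoking the same extraction via os/exec is below; the paths come from the log, and it assumes tar and an lz4 binary are installed, which is not guaranteed on every host.

// extractpreload: minimal sketch, not minikube's implementation. It shells out
// to tar with an lz4 decompressor, mirroring the command in the log above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func extractPreload(tarball, destDir string) error {
	start := time.Now()
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", destDir, "-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("tar failed: %v\n%s", err, out)
	}
	fmt.Printf("took %s to extract the tarball\n", time.Since(start))
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		log.Fatal(err)
	}
}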
	I0717 19:59:35.622240 1101908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:59:35.672113 1101908 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0717 19:59:35.672149 1101908 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 19:59:35.672223 1101908 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:59:35.672279 1101908 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 19:59:35.672325 1101908 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0717 19:59:35.672344 1101908 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 19:59:35.672453 1101908 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 19:59:35.672533 1101908 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0717 19:59:35.672545 1101908 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 19:59:35.672645 1101908 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0717 19:59:35.674063 1101908 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 19:59:35.674110 1101908 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0717 19:59:35.674127 1101908 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0717 19:59:35.674114 1101908 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 19:59:35.674068 1101908 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 19:59:35.674075 1101908 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 19:59:35.674208 1101908 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:59:35.674236 1101908 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0717 19:59:35.835219 1101908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0717 19:59:35.840811 1101908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0717 19:59:35.855242 1101908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0717 19:59:35.857212 1101908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 19:59:35.860547 1101908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0717 19:59:35.864234 1101908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0717 19:59:35.864519 1101908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0717 19:59:35.958693 1101908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:59:35.980110 1101908 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0717 19:59:35.980198 1101908 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0717 19:59:35.980258 1101908 ssh_runner.go:195] Run: which crictl
	I0717 19:59:36.057216 1101908 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0717 19:59:36.057278 1101908 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 19:59:36.057301 1101908 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0717 19:59:36.057334 1101908 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0717 19:59:36.057342 1101908 ssh_runner.go:195] Run: which crictl
	I0717 19:59:36.057362 1101908 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0717 19:59:36.057383 1101908 ssh_runner.go:195] Run: which crictl
	I0717 19:59:36.057412 1101908 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 19:59:36.057451 1101908 ssh_runner.go:195] Run: which crictl
	I0717 19:59:36.066796 1101908 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0717 19:59:36.066859 1101908 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 19:59:36.066944 1101908 ssh_runner.go:195] Run: which crictl
	I0717 19:59:36.084336 1101908 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0717 19:59:36.084398 1101908 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 19:59:36.084439 1101908 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0717 19:59:36.084454 1101908 ssh_runner.go:195] Run: which crictl
	I0717 19:59:36.084479 1101908 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0717 19:59:36.084520 1101908 ssh_runner.go:195] Run: which crictl
	I0717 19:59:36.208377 1101908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0717 19:59:36.208641 1101908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0717 19:59:36.208730 1101908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0717 19:59:36.208827 1101908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 19:59:36.208839 1101908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0717 19:59:36.208879 1101908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0717 19:59:36.208922 1101908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0717 19:59:36.375090 1101908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0717 19:59:36.375371 1101908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0717 19:59:36.383660 1101908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0717 19:59:36.383770 1101908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0717 19:59:36.383841 1101908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0717 19:59:36.383872 1101908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0717 19:59:36.383950 1101908 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I0717 19:59:36.383986 1101908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0717 19:59:36.388877 1101908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0717 19:59:36.388897 1101908 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0717 19:59:36.388941 1101908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0717 19:59:35.275990 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:37.773385 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:34.927926 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:36.940406 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:36.219570 1103141 pod_ready.go:102] pod "coredns-5d78c9869d-rgdz8" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:37.338137 1103141 pod_ready.go:92] pod "coredns-5d78c9869d-rgdz8" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:37.338209 1103141 pod_ready.go:81] duration metric: took 5.159283632s waiting for pod "coredns-5d78c9869d-rgdz8" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:37.338228 1103141 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:39.354623 1103141 pod_ready.go:102] pod "etcd-embed-certs-114855" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:37.751639 1101908 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.362667245s)
	I0717 19:59:37.751681 1101908 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0717 19:59:37.751736 1101908 cache_images.go:92] LoadImages completed in 2.079569378s
	W0717 19:59:37.751899 1101908 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	I0717 19:59:37.752005 1101908 ssh_runner.go:195] Run: crio config
	I0717 19:59:37.844809 1101908 cni.go:84] Creating CNI manager for ""
	I0717 19:59:37.844845 1101908 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:59:37.844870 1101908 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 19:59:37.844896 1101908 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.177 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-149000 NodeName:old-k8s-version-149000 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.177"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.177 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 19:59:37.845116 1101908 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.177
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-149000"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.177
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.177"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-149000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.177:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:59:37.845228 1101908 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-149000 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.177
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-149000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 19:59:37.845312 1101908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0717 19:59:37.859556 1101908 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:59:37.859640 1101908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:59:37.872740 1101908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0717 19:59:37.891132 1101908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:59:37.911902 1101908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0717 19:59:37.933209 1101908 ssh_runner.go:195] Run: grep 192.168.50.177	control-plane.minikube.internal$ /etc/hosts
	I0717 19:59:37.937317 1101908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.177	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
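The two commands above first grep /etc/hosts for the control-plane entry and then rewrite the file so control-plane.minikube.internal resolves to the node IP. The Go sketch below shows the same idempotent rewrite in outline; only the file path, IP and hostname are taken from the log, everything else is hypothetical, and writing /etc/hosts for real requires root.

// ensurehosts: illustrative sketch of the /etc/hosts rewrite shown above -
// drop any existing line for the hostname, then append the desired mapping.
package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Keep every line that does not already map this hostname.
		if !strings.HasSuffix(line, "\t"+hostname) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.50.177", "control-plane.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}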
	I0717 19:59:37.950660 1101908 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000 for IP: 192.168.50.177
	I0717 19:59:37.950706 1101908 certs.go:190] acquiring lock for shared ca certs: {Name:mkdfbc203d7bd874aff2f1c4e5c3352251804e32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:59:37.950921 1101908 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key
	I0717 19:59:37.950998 1101908 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key
	I0717 19:59:37.951128 1101908 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/client.key
	I0717 19:59:37.951227 1101908 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/apiserver.key.c699d2bc
	I0717 19:59:37.951298 1101908 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/proxy-client.key
	I0717 19:59:37.951487 1101908 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem (1338 bytes)
	W0717 19:59:37.951529 1101908 certs.go:433] ignoring /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954_empty.pem, impossibly tiny 0 bytes
	I0717 19:59:37.951541 1101908 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:59:37.951567 1101908 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem (1082 bytes)
	I0717 19:59:37.951593 1101908 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:59:37.951634 1101908 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem (1675 bytes)
	I0717 19:59:37.951691 1101908 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:59:37.952597 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 19:59:37.980488 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 19:59:38.008389 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:59:38.037605 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 19:59:38.066142 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:59:38.095838 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 19:59:38.123279 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:59:38.158528 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 19:59:38.190540 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:59:38.218519 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem --> /usr/share/ca-certificates/1068954.pem (1338 bytes)
	I0717 19:59:38.245203 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /usr/share/ca-certificates/10689542.pem (1708 bytes)
	I0717 19:59:38.273077 1101908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:59:38.292610 1101908 ssh_runner.go:195] Run: openssl version
	I0717 19:59:38.298983 1101908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1068954.pem && ln -fs /usr/share/ca-certificates/1068954.pem /etc/ssl/certs/1068954.pem"
	I0717 19:59:38.311477 1101908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1068954.pem
	I0717 19:59:38.316847 1101908 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:52 /usr/share/ca-certificates/1068954.pem
	I0717 19:59:38.316914 1101908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1068954.pem
	I0717 19:59:38.323114 1101908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1068954.pem /etc/ssl/certs/51391683.0"
	I0717 19:59:38.334773 1101908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10689542.pem && ln -fs /usr/share/ca-certificates/10689542.pem /etc/ssl/certs/10689542.pem"
	I0717 19:59:38.346327 1101908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10689542.pem
	I0717 19:59:38.351639 1101908 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:52 /usr/share/ca-certificates/10689542.pem
	I0717 19:59:38.351712 1101908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10689542.pem
	I0717 19:59:38.357677 1101908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10689542.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:59:38.369278 1101908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:59:38.380948 1101908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:59:38.386116 1101908 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:59:38.386181 1101908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:59:38.392204 1101908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:59:38.404677 1101908 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 19:59:38.409861 1101908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:59:38.416797 1101908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:59:38.424606 1101908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:59:38.431651 1101908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:59:38.439077 1101908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:59:38.445660 1101908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
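Each openssl x509 -checkend 86400 call above asks whether a certificate will still be valid 24 hours from now. A small Go equivalent using crypto/x509 is sketched below; the certificate path is one of the files from the log, and the program is an illustration rather than what minikube actually runs.

// checkend mirrors `openssl x509 -noout -in CERT -checkend 86400`: parse a PEM
// certificate and report whether it expires within the given window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func expiresWithin(certPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when NotAfter falls before now+window, i.e. the cert expires soon.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}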
	I0717 19:59:38.452464 1101908 kubeadm.go:404] StartCluster: {Name:old-k8s-version-149000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-149000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.177 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:59:38.452656 1101908 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:59:38.452738 1101908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:59:38.485873 1101908 cri.go:89] found id: ""
	I0717 19:59:38.485972 1101908 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:59:38.496998 1101908 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 19:59:38.497033 1101908 kubeadm.go:636] restartCluster start
	I0717 19:59:38.497096 1101908 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:59:38.508054 1101908 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:38.509416 1101908 kubeconfig.go:92] found "old-k8s-version-149000" server: "https://192.168.50.177:8443"
	I0717 19:59:38.512586 1101908 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:59:38.524412 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:38.524486 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:38.537824 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:39.038221 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:39.038331 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:39.053301 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:39.538741 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:39.538834 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:39.552525 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:40.038056 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:40.038173 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:40.052410 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:40.537953 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:40.538060 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:40.551667 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:41.038241 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:41.038361 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:41.053485 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:41.538300 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:41.538402 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:41.552741 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:39.773598 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:42.273083 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:39.423700 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:41.918498 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:43.918876 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:40.856641 1103141 pod_ready.go:92] pod "etcd-embed-certs-114855" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:40.856671 1103141 pod_ready.go:81] duration metric: took 3.518433579s waiting for pod "etcd-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:40.856684 1103141 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:42.377156 1103141 pod_ready.go:92] pod "kube-apiserver-embed-certs-114855" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:42.377186 1103141 pod_ready.go:81] duration metric: took 1.520494525s waiting for pod "kube-apiserver-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:42.377196 1103141 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:42.387651 1103141 pod_ready.go:92] pod "kube-controller-manager-embed-certs-114855" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:42.387680 1103141 pod_ready.go:81] duration metric: took 10.47667ms waiting for pod "kube-controller-manager-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:42.387692 1103141 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lkncr" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:42.394735 1103141 pod_ready.go:92] pod "kube-proxy-lkncr" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:42.394770 1103141 pod_ready.go:81] duration metric: took 7.070744ms waiting for pod "kube-proxy-lkncr" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:42.394784 1103141 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:42.402496 1103141 pod_ready.go:92] pod "kube-scheduler-embed-certs-114855" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:42.402530 1103141 pod_ready.go:81] duration metric: took 7.737273ms waiting for pod "kube-scheduler-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:42.402544 1103141 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:44.460075 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
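The pod_ready.go messages above and below track each control-plane pod until its Ready condition turns True. Below is a minimal, hypothetical client-go sketch of that kind of wait; the kubeconfig path is a placeholder, and the real minikube helper handles label selectors and restarts that this sketch does not.

// podready: illustrative sketch of waiting for a pod's Ready condition,
// similar in spirit to the pod_ready.go messages in the log. Not minikube code.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPodReady(client kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
			fmt.Printf("pod %q has status \"Ready\":\"False\"\n", name)
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
}

func main() {
	// Placeholder kubeconfig path; substitute the profile's kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	if err := waitForPodReady(client, "kube-system", "etcd-embed-certs-114855", 4*time.Minute); err != nil {
		log.Fatal(err)
	}
}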
	I0717 19:59:42.038941 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:42.039027 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:42.054992 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:42.538144 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:42.538257 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:42.552160 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:43.038484 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:43.038599 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:43.052649 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:43.538407 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:43.538511 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:43.552927 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:44.038266 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:44.038396 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:44.051851 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:44.538425 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:44.538520 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:44.551726 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:45.038244 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:45.038359 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:45.053215 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:45.538908 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:45.539008 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:45.552009 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:46.038089 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:46.038204 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:46.051955 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:46.538209 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:46.538311 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:46.552579 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:44.273154 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:46.772548 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:45.919143 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:47.919930 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:46.964219 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:49.459411 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:47.038345 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:47.038434 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:47.051506 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:47.538770 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:47.538855 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:47.551813 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:48.038766 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:48.038900 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:48.053717 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:48.524471 1101908 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0717 19:59:48.524521 1101908 kubeadm.go:1128] stopping kube-system containers ...
	I0717 19:59:48.524542 1101908 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:59:48.524625 1101908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:59:48.564396 1101908 cri.go:89] found id: ""
	I0717 19:59:48.564475 1101908 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:59:48.582891 1101908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:59:48.594121 1101908 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:59:48.594212 1101908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:59:48.604963 1101908 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 19:59:48.604998 1101908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:48.756875 1101908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:49.645754 1101908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:49.876047 1101908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:49.996960 1101908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:50.109251 1101908 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:59:50.109337 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:59:50.630868 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:59:51.130836 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:59:51.630446 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:59:51.659578 1101908 api_server.go:72] duration metric: took 1.550325604s to wait for apiserver process to appear ...
	I0717 19:59:51.659605 1101908 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:59:51.659625 1101908 api_server.go:253] Checking apiserver healthz at https://192.168.50.177:8443/healthz ...
	I0717 19:59:48.773967 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:50.775054 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:53.274949 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:49.922365 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:52.422385 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:51.459819 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:53.958809 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:56.660515 1101908 api_server.go:269] stopped: https://192.168.50.177:8443/healthz: Get "https://192.168.50.177:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 19:59:55.773902 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:58.274862 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:54.427715 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:56.922668 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:57.161458 1101908 api_server.go:253] Checking apiserver healthz at https://192.168.50.177:8443/healthz ...
	I0717 19:59:57.720749 1101908 api_server.go:279] https://192.168.50.177:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:59:57.720797 1101908 api_server.go:103] status: https://192.168.50.177:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:59:57.720816 1101908 api_server.go:253] Checking apiserver healthz at https://192.168.50.177:8443/healthz ...
	I0717 19:59:57.828454 1101908 api_server.go:279] https://192.168.50.177:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:59:57.828489 1101908 api_server.go:103] status: https://192.168.50.177:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:59:58.160896 1101908 api_server.go:253] Checking apiserver healthz at https://192.168.50.177:8443/healthz ...
	I0717 19:59:58.173037 1101908 api_server.go:279] https://192.168.50.177:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0717 19:59:58.173072 1101908 api_server.go:103] status: https://192.168.50.177:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0717 19:59:58.660738 1101908 api_server.go:253] Checking apiserver healthz at https://192.168.50.177:8443/healthz ...
	I0717 19:59:58.672508 1101908 api_server.go:279] https://192.168.50.177:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0717 19:59:58.672551 1101908 api_server.go:103] status: https://192.168.50.177:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0717 19:59:59.161133 1101908 api_server.go:253] Checking apiserver healthz at https://192.168.50.177:8443/healthz ...
	I0717 19:59:59.169444 1101908 api_server.go:279] https://192.168.50.177:8443/healthz returned 200:
	ok
	I0717 19:59:59.179637 1101908 api_server.go:141] control plane version: v1.16.0
	I0717 19:59:59.179675 1101908 api_server.go:131] duration metric: took 7.520063574s to wait for apiserver health ...
	I0717 19:59:59.179689 1101908 cni.go:84] Creating CNI manager for ""
	I0717 19:59:59.179703 1101908 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:59:59.182357 1101908 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:59:55.959106 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:58.458415 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:00.458582 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:59.184702 1101908 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:59:59.197727 1101908 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 19:59:59.226682 1101908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:59:59.237874 1101908 system_pods.go:59] 7 kube-system pods found
	I0717 19:59:59.237911 1101908 system_pods.go:61] "coredns-5644d7b6d9-g7fjx" [f9f27bce-aaf6-43f8-8a4b-a87230ceed0e] Running
	I0717 19:59:59.237917 1101908 system_pods.go:61] "etcd-old-k8s-version-149000" [2c732d6d-8a38-401d-aebf-e439c7fcf530] Running
	I0717 19:59:59.237922 1101908 system_pods.go:61] "kube-apiserver-old-k8s-version-149000" [b7f2c355-86cd-4d4c-b7da-043094174829] Running
	I0717 19:59:59.237927 1101908 system_pods.go:61] "kube-controller-manager-old-k8s-version-149000" [30f723aa-a978-4fbb-9210-43a29284aa41] Running
	I0717 19:59:59.237931 1101908 system_pods.go:61] "kube-proxy-f68hg" [a39dea78-e9bb-4f1b-8615-a51a42c6d13f] Running
	I0717 19:59:59.237935 1101908 system_pods.go:61] "kube-scheduler-old-k8s-version-149000" [a84bce5d-82af-4282-a36f-0d1031715a1a] Running
	I0717 19:59:59.237938 1101908 system_pods.go:61] "storage-provisioner" [c5e96cda-ddbc-4d29-86c3-d7ac4c717f61] Running
	I0717 19:59:59.237944 1101908 system_pods.go:74] duration metric: took 11.222716ms to wait for pod list to return data ...
	I0717 19:59:59.237952 1101908 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:59:59.241967 1101908 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 19:59:59.242003 1101908 node_conditions.go:123] node cpu capacity is 2
	I0717 19:59:59.242051 1101908 node_conditions.go:105] duration metric: took 4.091279ms to run NodePressure ...
	I0717 19:59:59.242080 1101908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:59.612659 1101908 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 19:59:59.623317 1101908 retry.go:31] will retry after 338.189596ms: kubelet not initialised
	I0717 19:59:59.972718 1101908 retry.go:31] will retry after 522.339878ms: kubelet not initialised
	I0717 20:00:00.503134 1101908 retry.go:31] will retry after 523.863562ms: kubelet not initialised
	I0717 20:00:01.032819 1101908 retry.go:31] will retry after 993.099088ms: kubelet not initialised
	I0717 20:00:00.773342 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:02.775558 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:59.424228 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:01.424791 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:03.920321 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:02.462125 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:04.960081 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:02.031287 1101908 retry.go:31] will retry after 1.744721946s: kubelet not initialised
	I0717 20:00:03.780335 1101908 retry.go:31] will retry after 2.704259733s: kubelet not initialised
	I0717 20:00:06.491260 1101908 retry.go:31] will retry after 2.934973602s: kubelet not initialised
	I0717 20:00:05.273963 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:07.772710 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:06.428014 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:08.920105 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:07.459314 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:09.959084 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:09.433009 1101908 retry.go:31] will retry after 2.28873038s: kubelet not initialised
	I0717 20:00:11.729010 1101908 retry.go:31] will retry after 4.261199393s: kubelet not initialised
	I0717 20:00:09.772754 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:11.773102 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:11.424610 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:13.922384 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:11.959437 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:14.459152 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:15.999734 1101908 retry.go:31] will retry after 8.732603244s: kubelet not initialised
	I0717 20:00:14.278965 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:16.772786 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:16.424980 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:18.919729 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:16.460363 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:18.960012 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:18.773609 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:21.272529 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:23.272642 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:20.922495 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:23.422032 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:21.460808 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:23.959242 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:24.739282 1101908 retry.go:31] will retry after 8.040459769s: kubelet not initialised
	I0717 20:00:25.274297 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:27.773410 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:25.923167 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:28.420939 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:25.959431 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:27.960549 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:30.459601 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:30.274460 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:32.276595 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:30.428741 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:32.919601 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:32.459855 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:34.960084 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:32.784544 1101908 kubeadm.go:787] kubelet initialised
	I0717 20:00:32.784571 1101908 kubeadm.go:788] duration metric: took 33.171875609s waiting for restarted kubelet to initialise ...
	I0717 20:00:32.784579 1101908 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 20:00:32.789500 1101908 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-9x6xd" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:32.795369 1101908 pod_ready.go:92] pod "coredns-5644d7b6d9-9x6xd" in "kube-system" namespace has status "Ready":"True"
	I0717 20:00:32.795396 1101908 pod_ready.go:81] duration metric: took 5.860061ms waiting for pod "coredns-5644d7b6d9-9x6xd" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:32.795406 1101908 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-g7fjx" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:32.800899 1101908 pod_ready.go:92] pod "coredns-5644d7b6d9-g7fjx" in "kube-system" namespace has status "Ready":"True"
	I0717 20:00:32.800922 1101908 pod_ready.go:81] duration metric: took 5.509805ms waiting for pod "coredns-5644d7b6d9-g7fjx" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:32.800931 1101908 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-149000" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:32.806100 1101908 pod_ready.go:92] pod "etcd-old-k8s-version-149000" in "kube-system" namespace has status "Ready":"True"
	I0717 20:00:32.806123 1101908 pod_ready.go:81] duration metric: took 5.185189ms waiting for pod "etcd-old-k8s-version-149000" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:32.806139 1101908 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-149000" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:32.810963 1101908 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-149000" in "kube-system" namespace has status "Ready":"True"
	I0717 20:00:32.810990 1101908 pod_ready.go:81] duration metric: took 4.843622ms waiting for pod "kube-apiserver-old-k8s-version-149000" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:32.811000 1101908 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-149000" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:33.183907 1101908 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-149000" in "kube-system" namespace has status "Ready":"True"
	I0717 20:00:33.183945 1101908 pod_ready.go:81] duration metric: took 372.931164ms waiting for pod "kube-controller-manager-old-k8s-version-149000" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:33.183961 1101908 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f68hg" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:33.585028 1101908 pod_ready.go:92] pod "kube-proxy-f68hg" in "kube-system" namespace has status "Ready":"True"
	I0717 20:00:33.585064 1101908 pod_ready.go:81] duration metric: took 401.095806ms waiting for pod "kube-proxy-f68hg" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:33.585075 1101908 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-149000" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:33.984668 1101908 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-149000" in "kube-system" namespace has status "Ready":"True"
	I0717 20:00:33.984702 1101908 pod_ready.go:81] duration metric: took 399.618516ms waiting for pod "kube-scheduler-old-k8s-version-149000" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:33.984719 1101908 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:36.392779 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:34.774126 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:37.273706 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:34.921839 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:37.434861 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:37.460518 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:39.960345 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:38.393483 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:40.893085 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:39.773390 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:41.773759 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:39.920512 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:41.920773 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:43.921648 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:42.458830 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:44.958864 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:43.393911 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:45.395481 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:44.273504 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:46.772509 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:45.923812 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:48.422996 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:47.459707 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:49.960056 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:47.892578 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:50.393881 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:48.774960 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:51.273048 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:50.919768 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:52.920372 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:52.458962 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:54.460345 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:52.892172 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:54.893802 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:53.775343 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:56.272701 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:55.427664 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:57.919163 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:56.961203 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:59.458439 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:57.393429 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:59.892089 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:58.772852 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:00.773814 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:03.272058 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:59.920118 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:01.920524 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:01.459281 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:03.460348 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:01.892908 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:04.392588 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:06.393093 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:05.272559 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:07.273883 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:04.421056 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:06.931053 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:05.960254 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:08.457727 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:10.459842 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:08.394141 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:10.892223 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:09.772505 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:11.772971 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:09.422626 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:11.423328 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:13.424365 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:12.958612 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:14.965490 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:12.893418 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:15.394472 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:14.272688 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:16.273685 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:15.919394 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:17.923047 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:17.460160 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:19.958439 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:17.894003 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:19.894407 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:18.772990 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:21.272821 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:23.273740 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:20.427751 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:22.920375 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:21.959239 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:23.959721 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:22.392669 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:24.392858 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:26.392896 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:25.773792 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:28.272610 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:25.423969 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:27.920156 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:25.960648 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:28.460460 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:28.393135 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:30.892597 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:30.273479 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:32.772964 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:29.920769 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:31.921078 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:30.959214 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:33.459431 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:32.892662 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:34.893997 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:35.271152 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:37.273194 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:34.423090 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:36.920078 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:35.960397 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:38.458322 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:40.459780 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:37.393337 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:39.394287 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:39.772604 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:42.273098 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:39.421175 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:41.422356 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:43.920740 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:42.959038 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:45.461396 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:41.891807 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:43.892286 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:45.894698 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:44.772741 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:46.774412 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:46.424856 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:48.425180 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:47.959378 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:49.960002 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:48.392683 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:50.393690 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:49.275313 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:51.773822 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:50.919701 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:52.919921 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:52.459957 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:54.958709 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:52.894991 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:55.392555 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:54.273372 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:56.775369 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:54.920834 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:56.921032 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:57.458730 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:59.460912 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:57.393828 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:59.892700 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:59.272482 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:01.774098 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:59.429623 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:01.920129 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:03.920308 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:01.958119 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:03.958450 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:01.894130 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:03.894522 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:05.895253 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:04.273903 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:06.773689 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:06.424487 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:08.427374 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:05.961652 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:08.457716 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:10.458998 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:08.392784 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:10.393957 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:08.774235 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:11.272040 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:13.273524 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:10.920257 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:12.921203 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:12.459321 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:14.460373 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:12.893440 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:15.392849 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:15.774096 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:18.274263 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:15.421911 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:17.922223 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:16.461304 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:18.958236 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:17.393857 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:19.893380 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:20.274441 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:22.773139 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:20.426046 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:22.919646 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:20.959049 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:23.460465 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:22.392918 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:24.892470 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:25.273192 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:27.273498 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:24.919892 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:26.921648 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:25.961037 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:28.458547 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:26.893611 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:29.393411 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:31.393789 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:29.771999 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:31.772639 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:29.419744 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:31.420846 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:33.422484 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:30.958391 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:33.457895 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:35.459845 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:33.893731 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:36.393503 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:34.272758 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:36.275172 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:35.920446 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:37.922565 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:37.460196 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:39.957808 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:38.394837 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:40.900948 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:38.772728 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:40.773003 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:43.273981 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:40.421480 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:42.919369 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:42.458683 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:44.458762 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:43.392899 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:45.893528 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:45.774587 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:48.273073 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:45.422093 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:47.429470 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:46.958556 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:49.457855 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:47.895376 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:50.392344 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:50.771704 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:52.772560 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:49.918779 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:51.919087 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:51.463426 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:53.957695 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:52.894219 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:54.894786 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:55.273619 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:57.775426 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:54.421093 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:56.424484 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:58.921289 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:55.959421 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:57.960287 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:00.460659 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:57.393604 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:59.394180 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:00.272948 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:02.274904 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:01.421007 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:03.422071 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:02.965138 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:05.458181 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:01.891831 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:03.892978 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:05.895017 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:04.772127 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:07.274312 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:05.920564 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:08.420835 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:07.459555 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:09.460645 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:08.392743 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:10.892887 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:09.772353 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:11.772877 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:12.368174 1102136 pod_ready.go:81] duration metric: took 4m0.000660307s waiting for pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace to be "Ready" ...
	E0717 20:03:12.368224 1102136 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 20:03:12.368251 1102136 pod_ready.go:38] duration metric: took 4m3.60522468s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 20:03:12.368299 1102136 api_server.go:52] waiting for apiserver process to appear ...
	I0717 20:03:12.368343 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 20:03:12.368422 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 20:03:12.425640 1102136 cri.go:89] found id: "eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3"
	I0717 20:03:12.425667 1102136 cri.go:89] found id: ""
	I0717 20:03:12.425684 1102136 logs.go:284] 1 containers: [eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3]
	I0717 20:03:12.425749 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:12.430857 1102136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 20:03:12.430926 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 20:03:12.464958 1102136 cri.go:89] found id: "4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc"
	I0717 20:03:12.464987 1102136 cri.go:89] found id: ""
	I0717 20:03:12.464996 1102136 logs.go:284] 1 containers: [4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc]
	I0717 20:03:12.465063 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:12.470768 1102136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 20:03:12.470865 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 20:03:12.509622 1102136 cri.go:89] found id: "63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce"
	I0717 20:03:12.509655 1102136 cri.go:89] found id: ""
	I0717 20:03:12.509665 1102136 logs.go:284] 1 containers: [63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce]
	I0717 20:03:12.509718 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:12.514266 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 20:03:12.514346 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 20:03:12.556681 1102136 cri.go:89] found id: "0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5"
	I0717 20:03:12.556705 1102136 cri.go:89] found id: ""
	I0717 20:03:12.556713 1102136 logs.go:284] 1 containers: [0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5]
	I0717 20:03:12.556779 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:12.561653 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 20:03:12.561749 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 20:03:12.595499 1102136 cri.go:89] found id: "c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a"
	I0717 20:03:12.595527 1102136 cri.go:89] found id: ""
	I0717 20:03:12.595537 1102136 logs.go:284] 1 containers: [c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a]
	I0717 20:03:12.595603 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:12.600644 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 20:03:12.600728 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 20:03:12.635293 1102136 cri.go:89] found id: "2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9"
	I0717 20:03:12.635327 1102136 cri.go:89] found id: ""
	I0717 20:03:12.635341 1102136 logs.go:284] 1 containers: [2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9]
	I0717 20:03:12.635409 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:12.640445 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 20:03:12.640612 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 20:03:12.679701 1102136 cri.go:89] found id: ""
	I0717 20:03:12.679738 1102136 logs.go:284] 0 containers: []
	W0717 20:03:12.679748 1102136 logs.go:286] No container was found matching "kindnet"
	I0717 20:03:12.679755 1102136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 20:03:12.679817 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 20:03:12.711772 1102136 cri.go:89] found id: "434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447"
	I0717 20:03:12.711815 1102136 cri.go:89] found id: "cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379"
	I0717 20:03:12.711822 1102136 cri.go:89] found id: ""
	I0717 20:03:12.711833 1102136 logs.go:284] 2 containers: [434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447 cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379]
	I0717 20:03:12.711904 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:12.716354 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:12.720769 1102136 logs.go:123] Gathering logs for kube-proxy [c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a] ...
	I0717 20:03:12.720806 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a"
	I0717 20:03:12.757719 1102136 logs.go:123] Gathering logs for kube-controller-manager [2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9] ...
	I0717 20:03:12.757766 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9"
	I0717 20:03:12.804972 1102136 logs.go:123] Gathering logs for coredns [63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce] ...
	I0717 20:03:12.805019 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce"
	I0717 20:03:12.841021 1102136 logs.go:123] Gathering logs for kube-scheduler [0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5] ...
	I0717 20:03:12.841055 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5"
	I0717 20:03:12.890140 1102136 logs.go:123] Gathering logs for storage-provisioner [434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447] ...
	I0717 20:03:12.890185 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447"
	I0717 20:03:12.926177 1102136 logs.go:123] Gathering logs for kubelet ...
	I0717 20:03:12.926219 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 20:03:12.985838 1102136 logs.go:123] Gathering logs for dmesg ...
	I0717 20:03:12.985904 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 20:03:13.003223 1102136 logs.go:123] Gathering logs for describe nodes ...
	I0717 20:03:13.003257 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 20:03:13.180312 1102136 logs.go:123] Gathering logs for kube-apiserver [eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3] ...
	I0717 20:03:13.180361 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3"
	I0717 20:03:13.234663 1102136 logs.go:123] Gathering logs for etcd [4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc] ...
	I0717 20:03:13.234711 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc"
	I0717 20:03:13.297008 1102136 logs.go:123] Gathering logs for storage-provisioner [cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379] ...
	I0717 20:03:13.297065 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379"
	I0717 20:03:13.335076 1102136 logs.go:123] Gathering logs for CRI-O ...
	I0717 20:03:13.335110 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 20:03:10.919208 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:12.921588 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:11.958471 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:13.959630 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:12.893125 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:15.392702 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:13.901775 1102136 logs.go:123] Gathering logs for container status ...
	I0717 20:03:13.901828 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 20:03:16.451075 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:03:16.470892 1102136 api_server.go:72] duration metric: took 4m15.23519157s to wait for apiserver process to appear ...
	I0717 20:03:16.470922 1102136 api_server.go:88] waiting for apiserver healthz status ...
	I0717 20:03:16.470963 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 20:03:16.471033 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 20:03:16.515122 1102136 cri.go:89] found id: "eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3"
	I0717 20:03:16.515151 1102136 cri.go:89] found id: ""
	I0717 20:03:16.515161 1102136 logs.go:284] 1 containers: [eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3]
	I0717 20:03:16.515217 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:16.519734 1102136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 20:03:16.519828 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 20:03:16.552440 1102136 cri.go:89] found id: "4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc"
	I0717 20:03:16.552491 1102136 cri.go:89] found id: ""
	I0717 20:03:16.552503 1102136 logs.go:284] 1 containers: [4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc]
	I0717 20:03:16.552569 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:16.557827 1102136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 20:03:16.557935 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 20:03:16.598317 1102136 cri.go:89] found id: "63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce"
	I0717 20:03:16.598344 1102136 cri.go:89] found id: ""
	I0717 20:03:16.598354 1102136 logs.go:284] 1 containers: [63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce]
	I0717 20:03:16.598425 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:16.604234 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 20:03:16.604331 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 20:03:16.638321 1102136 cri.go:89] found id: "0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5"
	I0717 20:03:16.638349 1102136 cri.go:89] found id: ""
	I0717 20:03:16.638360 1102136 logs.go:284] 1 containers: [0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5]
	I0717 20:03:16.638429 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:16.642755 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 20:03:16.642840 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 20:03:16.681726 1102136 cri.go:89] found id: "c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a"
	I0717 20:03:16.681763 1102136 cri.go:89] found id: ""
	I0717 20:03:16.681776 1102136 logs.go:284] 1 containers: [c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a]
	I0717 20:03:16.681848 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:16.686317 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 20:03:16.686394 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 20:03:16.723303 1102136 cri.go:89] found id: "2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9"
	I0717 20:03:16.723328 1102136 cri.go:89] found id: ""
	I0717 20:03:16.723337 1102136 logs.go:284] 1 containers: [2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9]
	I0717 20:03:16.723387 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:16.727491 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 20:03:16.727586 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 20:03:16.756931 1102136 cri.go:89] found id: ""
	I0717 20:03:16.756960 1102136 logs.go:284] 0 containers: []
	W0717 20:03:16.756968 1102136 logs.go:286] No container was found matching "kindnet"
	I0717 20:03:16.756975 1102136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 20:03:16.757036 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 20:03:16.788732 1102136 cri.go:89] found id: "434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447"
	I0717 20:03:16.788819 1102136 cri.go:89] found id: "cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379"
	I0717 20:03:16.788832 1102136 cri.go:89] found id: ""
	I0717 20:03:16.788845 1102136 logs.go:284] 2 containers: [434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447 cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379]
	I0717 20:03:16.788913 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:16.793783 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:16.797868 1102136 logs.go:123] Gathering logs for dmesg ...
	I0717 20:03:16.797892 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 20:03:16.813545 1102136 logs.go:123] Gathering logs for etcd [4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc] ...
	I0717 20:03:16.813603 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc"
	I0717 20:03:16.865094 1102136 logs.go:123] Gathering logs for coredns [63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce] ...
	I0717 20:03:16.865144 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce"
	I0717 20:03:16.904821 1102136 logs.go:123] Gathering logs for kube-scheduler [0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5] ...
	I0717 20:03:16.904869 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5"
	I0717 20:03:16.945822 1102136 logs.go:123] Gathering logs for kube-proxy [c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a] ...
	I0717 20:03:16.945865 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a"
	I0717 20:03:16.986531 1102136 logs.go:123] Gathering logs for storage-provisioner [434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447] ...
	I0717 20:03:16.986580 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447"
	I0717 20:03:17.023216 1102136 logs.go:123] Gathering logs for storage-provisioner [cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379] ...
	I0717 20:03:17.023253 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379"
	I0717 20:03:17.062491 1102136 logs.go:123] Gathering logs for kubelet ...
	I0717 20:03:17.062532 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 20:03:17.137024 1102136 logs.go:123] Gathering logs for describe nodes ...
	I0717 20:03:17.137085 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 20:03:17.292825 1102136 logs.go:123] Gathering logs for kube-apiserver [eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3] ...
	I0717 20:03:17.292881 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3"
	I0717 20:03:17.345470 1102136 logs.go:123] Gathering logs for kube-controller-manager [2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9] ...
	I0717 20:03:17.345519 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9"
	I0717 20:03:17.401262 1102136 logs.go:123] Gathering logs for CRI-O ...
	I0717 20:03:17.401326 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 20:03:18.037384 1102136 logs.go:123] Gathering logs for container status ...
	I0717 20:03:18.037440 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 20:03:15.422242 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:17.011882 1102415 pod_ready.go:81] duration metric: took 4m0.000519116s waiting for pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace to be "Ready" ...
	E0717 20:03:17.011940 1102415 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 20:03:17.011951 1102415 pod_ready.go:38] duration metric: took 4m2.40035739s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 20:03:17.011974 1102415 api_server.go:52] waiting for apiserver process to appear ...
	I0717 20:03:17.012009 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 20:03:17.012082 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 20:03:17.072352 1102415 cri.go:89] found id: "210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a"
	I0717 20:03:17.072381 1102415 cri.go:89] found id: ""
	I0717 20:03:17.072396 1102415 logs.go:284] 1 containers: [210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a]
	I0717 20:03:17.072467 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:17.078353 1102415 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 20:03:17.078432 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 20:03:17.122416 1102415 cri.go:89] found id: "bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2"
	I0717 20:03:17.122455 1102415 cri.go:89] found id: ""
	I0717 20:03:17.122466 1102415 logs.go:284] 1 containers: [bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2]
	I0717 20:03:17.122539 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:17.128311 1102415 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 20:03:17.128394 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 20:03:17.166606 1102415 cri.go:89] found id: "cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524"
	I0717 20:03:17.166637 1102415 cri.go:89] found id: ""
	I0717 20:03:17.166653 1102415 logs.go:284] 1 containers: [cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524]
	I0717 20:03:17.166720 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:17.172605 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 20:03:17.172693 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 20:03:17.221109 1102415 cri.go:89] found id: "9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261"
	I0717 20:03:17.221138 1102415 cri.go:89] found id: ""
	I0717 20:03:17.221149 1102415 logs.go:284] 1 containers: [9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261]
	I0717 20:03:17.221216 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:17.226305 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 20:03:17.226394 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 20:03:17.271876 1102415 cri.go:89] found id: "76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951"
	I0717 20:03:17.271902 1102415 cri.go:89] found id: ""
	I0717 20:03:17.271911 1102415 logs.go:284] 1 containers: [76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951]
	I0717 20:03:17.271979 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:17.281914 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 20:03:17.282016 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 20:03:17.319258 1102415 cri.go:89] found id: "280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6"
	I0717 20:03:17.319288 1102415 cri.go:89] found id: ""
	I0717 20:03:17.319309 1102415 logs.go:284] 1 containers: [280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6]
	I0717 20:03:17.319376 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:17.323955 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 20:03:17.324102 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 20:03:17.357316 1102415 cri.go:89] found id: ""
	I0717 20:03:17.357355 1102415 logs.go:284] 0 containers: []
	W0717 20:03:17.357367 1102415 logs.go:286] No container was found matching "kindnet"
	I0717 20:03:17.357375 1102415 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 20:03:17.357458 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 20:03:17.409455 1102415 cri.go:89] found id: "19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064"
	I0717 20:03:17.409553 1102415 cri.go:89] found id: "4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88"
	I0717 20:03:17.409613 1102415 cri.go:89] found id: ""
	I0717 20:03:17.409626 1102415 logs.go:284] 2 containers: [19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064 4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88]
	I0717 20:03:17.409706 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:17.417046 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:17.428187 1102415 logs.go:123] Gathering logs for kubelet ...
	I0717 20:03:17.428242 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 20:03:17.504409 1102415 logs.go:123] Gathering logs for describe nodes ...
	I0717 20:03:17.504454 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 20:03:17.673502 1102415 logs.go:123] Gathering logs for etcd [bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2] ...
	I0717 20:03:17.673576 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2"
	I0717 20:03:17.728765 1102415 logs.go:123] Gathering logs for kube-controller-manager [280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6] ...
	I0717 20:03:17.728818 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6"
	I0717 20:03:17.791192 1102415 logs.go:123] Gathering logs for container status ...
	I0717 20:03:17.791249 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 20:03:17.844883 1102415 logs.go:123] Gathering logs for storage-provisioner [19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064] ...
	I0717 20:03:17.844944 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064"
	I0717 20:03:17.891456 1102415 logs.go:123] Gathering logs for storage-provisioner [4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88] ...
	I0717 20:03:17.891501 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88"
	I0717 20:03:17.927018 1102415 logs.go:123] Gathering logs for CRI-O ...
	I0717 20:03:17.927057 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 20:03:18.493310 1102415 logs.go:123] Gathering logs for dmesg ...
	I0717 20:03:18.493362 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 20:03:18.510255 1102415 logs.go:123] Gathering logs for kube-apiserver [210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a] ...
	I0717 20:03:18.510302 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a"
	I0717 20:03:18.558006 1102415 logs.go:123] Gathering logs for coredns [cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524] ...
	I0717 20:03:18.558054 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524"
	I0717 20:03:18.595130 1102415 logs.go:123] Gathering logs for kube-scheduler [9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261] ...
	I0717 20:03:18.595166 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261"
	I0717 20:03:18.636909 1102415 logs.go:123] Gathering logs for kube-proxy [76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951] ...
	I0717 20:03:18.636967 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951"
	I0717 20:03:16.460091 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:18.959764 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:17.395341 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:19.891916 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:20.585703 1102136 api_server.go:253] Checking apiserver healthz at https://192.168.61.65:8443/healthz ...
	I0717 20:03:20.591606 1102136 api_server.go:279] https://192.168.61.65:8443/healthz returned 200:
	ok
	I0717 20:03:20.593225 1102136 api_server.go:141] control plane version: v1.27.3
	I0717 20:03:20.593249 1102136 api_server.go:131] duration metric: took 4.122320377s to wait for apiserver health ...
	I0717 20:03:20.593259 1102136 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 20:03:20.593297 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 20:03:20.593391 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 20:03:20.636361 1102136 cri.go:89] found id: "eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3"
	I0717 20:03:20.636401 1102136 cri.go:89] found id: ""
	I0717 20:03:20.636413 1102136 logs.go:284] 1 containers: [eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3]
	I0717 20:03:20.636488 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:20.641480 1102136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 20:03:20.641622 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 20:03:20.674769 1102136 cri.go:89] found id: "4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc"
	I0717 20:03:20.674791 1102136 cri.go:89] found id: ""
	I0717 20:03:20.674799 1102136 logs.go:284] 1 containers: [4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc]
	I0717 20:03:20.674852 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:20.679515 1102136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 20:03:20.679587 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 20:03:20.717867 1102136 cri.go:89] found id: "63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce"
	I0717 20:03:20.717914 1102136 cri.go:89] found id: ""
	I0717 20:03:20.717927 1102136 logs.go:284] 1 containers: [63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce]
	I0717 20:03:20.717997 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:20.723020 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 20:03:20.723106 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 20:03:20.759930 1102136 cri.go:89] found id: "0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5"
	I0717 20:03:20.759957 1102136 cri.go:89] found id: ""
	I0717 20:03:20.759968 1102136 logs.go:284] 1 containers: [0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5]
	I0717 20:03:20.760032 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:20.764308 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 20:03:20.764378 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 20:03:20.804542 1102136 cri.go:89] found id: "c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a"
	I0717 20:03:20.804570 1102136 cri.go:89] found id: ""
	I0717 20:03:20.804580 1102136 logs.go:284] 1 containers: [c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a]
	I0717 20:03:20.804654 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:20.810036 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 20:03:20.810133 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 20:03:20.846655 1102136 cri.go:89] found id: "2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9"
	I0717 20:03:20.846681 1102136 cri.go:89] found id: ""
	I0717 20:03:20.846689 1102136 logs.go:284] 1 containers: [2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9]
	I0717 20:03:20.846745 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:20.853633 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 20:03:20.853741 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 20:03:20.886359 1102136 cri.go:89] found id: ""
	I0717 20:03:20.886393 1102136 logs.go:284] 0 containers: []
	W0717 20:03:20.886405 1102136 logs.go:286] No container was found matching "kindnet"
	I0717 20:03:20.886413 1102136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 20:03:20.886489 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 20:03:20.924476 1102136 cri.go:89] found id: "434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447"
	I0717 20:03:20.924508 1102136 cri.go:89] found id: "cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379"
	I0717 20:03:20.924513 1102136 cri.go:89] found id: ""
	I0717 20:03:20.924524 1102136 logs.go:284] 2 containers: [434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447 cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379]
	I0717 20:03:20.924576 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:20.929775 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:20.935520 1102136 logs.go:123] Gathering logs for CRI-O ...
	I0717 20:03:20.935547 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 20:03:21.543605 1102136 logs.go:123] Gathering logs for describe nodes ...
	I0717 20:03:21.543668 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 20:03:21.694696 1102136 logs.go:123] Gathering logs for kube-scheduler [0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5] ...
	I0717 20:03:21.694763 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5"
	I0717 20:03:21.736092 1102136 logs.go:123] Gathering logs for kube-proxy [c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a] ...
	I0717 20:03:21.736150 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a"
	I0717 20:03:21.771701 1102136 logs.go:123] Gathering logs for kube-controller-manager [2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9] ...
	I0717 20:03:21.771749 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9"
	I0717 20:03:21.822783 1102136 logs.go:123] Gathering logs for storage-provisioner [434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447] ...
	I0717 20:03:21.822835 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447"
	I0717 20:03:21.885797 1102136 logs.go:123] Gathering logs for storage-provisioner [cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379] ...
	I0717 20:03:21.885851 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379"
	I0717 20:03:21.930801 1102136 logs.go:123] Gathering logs for container status ...
	I0717 20:03:21.930842 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 20:03:21.985829 1102136 logs.go:123] Gathering logs for kubelet ...
	I0717 20:03:21.985862 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 20:03:22.056958 1102136 logs.go:123] Gathering logs for dmesg ...
	I0717 20:03:22.057010 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 20:03:22.074352 1102136 logs.go:123] Gathering logs for kube-apiserver [eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3] ...
	I0717 20:03:22.074402 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3"
	I0717 20:03:22.128386 1102136 logs.go:123] Gathering logs for etcd [4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc] ...
	I0717 20:03:22.128437 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc"
	I0717 20:03:22.188390 1102136 logs.go:123] Gathering logs for coredns [63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce] ...
	I0717 20:03:22.188425 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce"
	I0717 20:03:21.172413 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:03:21.194614 1102415 api_server.go:72] duration metric: took 4m13.166163785s to wait for apiserver process to appear ...
	I0717 20:03:21.194645 1102415 api_server.go:88] waiting for apiserver healthz status ...
	I0717 20:03:21.194687 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 20:03:21.194748 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 20:03:21.229142 1102415 cri.go:89] found id: "210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a"
	I0717 20:03:21.229176 1102415 cri.go:89] found id: ""
	I0717 20:03:21.229186 1102415 logs.go:284] 1 containers: [210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a]
	I0717 20:03:21.229255 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:21.234039 1102415 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 20:03:21.234106 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 20:03:21.266482 1102415 cri.go:89] found id: "bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2"
	I0717 20:03:21.266516 1102415 cri.go:89] found id: ""
	I0717 20:03:21.266527 1102415 logs.go:284] 1 containers: [bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2]
	I0717 20:03:21.266596 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:21.271909 1102415 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 20:03:21.271992 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 20:03:21.309830 1102415 cri.go:89] found id: "cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524"
	I0717 20:03:21.309869 1102415 cri.go:89] found id: ""
	I0717 20:03:21.309878 1102415 logs.go:284] 1 containers: [cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524]
	I0717 20:03:21.309943 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:21.314757 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 20:03:21.314838 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 20:03:21.356650 1102415 cri.go:89] found id: "9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261"
	I0717 20:03:21.356681 1102415 cri.go:89] found id: ""
	I0717 20:03:21.356691 1102415 logs.go:284] 1 containers: [9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261]
	I0717 20:03:21.356748 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:21.361582 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 20:03:21.361667 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 20:03:21.394956 1102415 cri.go:89] found id: "76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951"
	I0717 20:03:21.394982 1102415 cri.go:89] found id: ""
	I0717 20:03:21.394994 1102415 logs.go:284] 1 containers: [76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951]
	I0717 20:03:21.395056 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:21.400073 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 20:03:21.400143 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 20:03:21.441971 1102415 cri.go:89] found id: "280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6"
	I0717 20:03:21.442004 1102415 cri.go:89] found id: ""
	I0717 20:03:21.442015 1102415 logs.go:284] 1 containers: [280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6]
	I0717 20:03:21.442083 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:21.447189 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 20:03:21.447253 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 20:03:21.479477 1102415 cri.go:89] found id: ""
	I0717 20:03:21.479512 1102415 logs.go:284] 0 containers: []
	W0717 20:03:21.479524 1102415 logs.go:286] No container was found matching "kindnet"
	I0717 20:03:21.479534 1102415 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 20:03:21.479615 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 20:03:21.515474 1102415 cri.go:89] found id: "19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064"
	I0717 20:03:21.515502 1102415 cri.go:89] found id: "4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88"
	I0717 20:03:21.515510 1102415 cri.go:89] found id: ""
	I0717 20:03:21.515521 1102415 logs.go:284] 2 containers: [19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064 4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88]
	I0717 20:03:21.515583 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:21.520398 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:21.525414 1102415 logs.go:123] Gathering logs for kube-proxy [76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951] ...
	I0717 20:03:21.525450 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951"
	I0717 20:03:21.564455 1102415 logs.go:123] Gathering logs for kubelet ...
	I0717 20:03:21.564492 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 20:03:21.628081 1102415 logs.go:123] Gathering logs for dmesg ...
	I0717 20:03:21.628127 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 20:03:21.646464 1102415 logs.go:123] Gathering logs for describe nodes ...
	I0717 20:03:21.646508 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 20:03:21.803148 1102415 logs.go:123] Gathering logs for kube-apiserver [210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a] ...
	I0717 20:03:21.803205 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a"
	I0717 20:03:21.856704 1102415 logs.go:123] Gathering logs for etcd [bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2] ...
	I0717 20:03:21.856765 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2"
	I0717 20:03:21.907860 1102415 logs.go:123] Gathering logs for coredns [cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524] ...
	I0717 20:03:21.907912 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524"
	I0717 20:03:21.953111 1102415 logs.go:123] Gathering logs for kube-scheduler [9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261] ...
	I0717 20:03:21.953158 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261"
	I0717 20:03:21.999947 1102415 logs.go:123] Gathering logs for kube-controller-manager [280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6] ...
	I0717 20:03:22.000008 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6"
	I0717 20:03:22.061041 1102415 logs.go:123] Gathering logs for storage-provisioner [19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064] ...
	I0717 20:03:22.061078 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064"
	I0717 20:03:22.103398 1102415 logs.go:123] Gathering logs for storage-provisioner [4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88] ...
	I0717 20:03:22.103432 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88"
	I0717 20:03:22.141810 1102415 logs.go:123] Gathering logs for container status ...
	I0717 20:03:22.141864 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 20:03:22.186692 1102415 logs.go:123] Gathering logs for CRI-O ...
	I0717 20:03:22.186726 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 20:03:24.737179 1102136 system_pods.go:59] 8 kube-system pods found
	I0717 20:03:24.737218 1102136 system_pods.go:61] "coredns-5d78c9869d-9mxdj" [fbff09fd-436d-4208-9187-b6312aa1c223] Running
	I0717 20:03:24.737225 1102136 system_pods.go:61] "etcd-no-preload-408472" [7125f19d-c1ed-4b1f-99be-207bbf5d8c70] Running
	I0717 20:03:24.737231 1102136 system_pods.go:61] "kube-apiserver-no-preload-408472" [0e54eaed-daad-434a-ad3b-96f7fb924099] Running
	I0717 20:03:24.737238 1102136 system_pods.go:61] "kube-controller-manager-no-preload-408472" [38ee8079-8142-45a9-8d5a-4abbc5c8bb3b] Running
	I0717 20:03:24.737243 1102136 system_pods.go:61] "kube-proxy-cntdn" [8653567b-abf9-468c-a030-45fc53fa0cc2] Running
	I0717 20:03:24.737248 1102136 system_pods.go:61] "kube-scheduler-no-preload-408472" [e51560a1-c1b0-407c-8635-df512bd033b5] Running
	I0717 20:03:24.737258 1102136 system_pods.go:61] "metrics-server-74d5c6b9c-hnngh" [dfff837e-dbba-4795-935d-9562d2744169] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:03:24.737269 1102136 system_pods.go:61] "storage-provisioner" [1aefd8ef-dec9-4e37-8648-8e5a62622cd3] Running
	I0717 20:03:24.737278 1102136 system_pods.go:74] duration metric: took 4.144012317s to wait for pod list to return data ...
	I0717 20:03:24.737290 1102136 default_sa.go:34] waiting for default service account to be created ...
	I0717 20:03:24.741216 1102136 default_sa.go:45] found service account: "default"
	I0717 20:03:24.741262 1102136 default_sa.go:55] duration metric: took 3.961044ms for default service account to be created ...
	I0717 20:03:24.741275 1102136 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 20:03:24.749060 1102136 system_pods.go:86] 8 kube-system pods found
	I0717 20:03:24.749094 1102136 system_pods.go:89] "coredns-5d78c9869d-9mxdj" [fbff09fd-436d-4208-9187-b6312aa1c223] Running
	I0717 20:03:24.749100 1102136 system_pods.go:89] "etcd-no-preload-408472" [7125f19d-c1ed-4b1f-99be-207bbf5d8c70] Running
	I0717 20:03:24.749104 1102136 system_pods.go:89] "kube-apiserver-no-preload-408472" [0e54eaed-daad-434a-ad3b-96f7fb924099] Running
	I0717 20:03:24.749109 1102136 system_pods.go:89] "kube-controller-manager-no-preload-408472" [38ee8079-8142-45a9-8d5a-4abbc5c8bb3b] Running
	I0717 20:03:24.749113 1102136 system_pods.go:89] "kube-proxy-cntdn" [8653567b-abf9-468c-a030-45fc53fa0cc2] Running
	I0717 20:03:24.749117 1102136 system_pods.go:89] "kube-scheduler-no-preload-408472" [e51560a1-c1b0-407c-8635-df512bd033b5] Running
	I0717 20:03:24.749125 1102136 system_pods.go:89] "metrics-server-74d5c6b9c-hnngh" [dfff837e-dbba-4795-935d-9562d2744169] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:03:24.749139 1102136 system_pods.go:89] "storage-provisioner" [1aefd8ef-dec9-4e37-8648-8e5a62622cd3] Running
	I0717 20:03:24.749147 1102136 system_pods.go:126] duration metric: took 7.865246ms to wait for k8s-apps to be running ...
	I0717 20:03:24.749155 1102136 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 20:03:24.749215 1102136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:03:24.765460 1102136 system_svc.go:56] duration metric: took 16.294048ms WaitForService to wait for kubelet.
	I0717 20:03:24.765503 1102136 kubeadm.go:581] duration metric: took 4m23.529814054s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 20:03:24.765587 1102136 node_conditions.go:102] verifying NodePressure condition ...
	I0717 20:03:24.769332 1102136 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 20:03:24.769368 1102136 node_conditions.go:123] node cpu capacity is 2
	I0717 20:03:24.769381 1102136 node_conditions.go:105] duration metric: took 3.788611ms to run NodePressure ...
	I0717 20:03:24.769392 1102136 start.go:228] waiting for startup goroutines ...
	I0717 20:03:24.769397 1102136 start.go:233] waiting for cluster config update ...
	I0717 20:03:24.769408 1102136 start.go:242] writing updated cluster config ...
	I0717 20:03:24.769830 1102136 ssh_runner.go:195] Run: rm -f paused
	I0717 20:03:24.827845 1102136 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 20:03:24.830624 1102136 out.go:177] * Done! kubectl is now configured to use "no-preload-408472" cluster and "default" namespace by default
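The block above is the readiness gating that precedes the "Done!" line: list the kube-system pods, confirm the "default" service account, check that the kubelet service is active, and verify node capacity. Below is a minimal sketch of those checks done directly with kubectl and systemctl rather than minikube's internal clients; the kubeconfig context and the handling of still-Pending add-on pods (like metrics-server above) are assumptions, not details taken from this run.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func run(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	// Rough equivalent of system_pods.go: every kube-system pod should be Running.
	phases, err := run("kubectl", "get", "pods", "-n", "kube-system",
		"-o", `jsonpath={range .items[*]}{.metadata.name}={.status.phase}{"\n"}{end}`)
	if err != nil {
		panic(err)
	}
	for _, line := range strings.Split(phases, "\n") {
		if line != "" && !strings.HasSuffix(line, "=Running") {
			fmt.Println("not running yet:", line)
		}
	}

	// Rough equivalent of default_sa.go: the "default" service account must exist.
	if _, err := run("kubectl", "get", "serviceaccount", "default", "-n", "default"); err != nil {
		fmt.Println("default service account not found yet")
	}

	// Rough equivalent of system_svc.go: the kubelet service must be active on the node.
	if _, err := run("sudo", "systemctl", "is-active", "--quiet", "kubelet"); err != nil {
		fmt.Println("kubelet service is not active")
	}
}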
	I0717 20:03:20.960575 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:23.458710 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:25.465429 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:21.893446 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:24.393335 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:26.393858 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:25.243410 1102415 api_server.go:253] Checking apiserver healthz at https://192.168.72.51:8444/healthz ...
	I0717 20:03:25.250670 1102415 api_server.go:279] https://192.168.72.51:8444/healthz returned 200:
	ok
	I0717 20:03:25.252086 1102415 api_server.go:141] control plane version: v1.27.3
	I0717 20:03:25.252111 1102415 api_server.go:131] duration metric: took 4.0574608s to wait for apiserver health ...
	I0717 20:03:25.252121 1102415 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 20:03:25.252146 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 20:03:25.252197 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 20:03:25.286754 1102415 cri.go:89] found id: "210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a"
	I0717 20:03:25.286785 1102415 cri.go:89] found id: ""
	I0717 20:03:25.286795 1102415 logs.go:284] 1 containers: [210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a]
	I0717 20:03:25.286867 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:25.292653 1102415 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 20:03:25.292733 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 20:03:25.328064 1102415 cri.go:89] found id: "bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2"
	I0717 20:03:25.328092 1102415 cri.go:89] found id: ""
	I0717 20:03:25.328101 1102415 logs.go:284] 1 containers: [bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2]
	I0717 20:03:25.328170 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:25.333727 1102415 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 20:03:25.333798 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 20:03:25.368132 1102415 cri.go:89] found id: "cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524"
	I0717 20:03:25.368159 1102415 cri.go:89] found id: ""
	I0717 20:03:25.368167 1102415 logs.go:284] 1 containers: [cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524]
	I0717 20:03:25.368245 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:25.373091 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 20:03:25.373197 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 20:03:25.414136 1102415 cri.go:89] found id: "9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261"
	I0717 20:03:25.414165 1102415 cri.go:89] found id: ""
	I0717 20:03:25.414175 1102415 logs.go:284] 1 containers: [9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261]
	I0717 20:03:25.414229 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:25.424603 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 20:03:25.424679 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 20:03:25.470289 1102415 cri.go:89] found id: "76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951"
	I0717 20:03:25.470320 1102415 cri.go:89] found id: ""
	I0717 20:03:25.470331 1102415 logs.go:284] 1 containers: [76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951]
	I0717 20:03:25.470401 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:25.476760 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 20:03:25.476851 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 20:03:25.511350 1102415 cri.go:89] found id: "280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6"
	I0717 20:03:25.511379 1102415 cri.go:89] found id: ""
	I0717 20:03:25.511390 1102415 logs.go:284] 1 containers: [280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6]
	I0717 20:03:25.511459 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:25.516259 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 20:03:25.516339 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 20:03:25.553868 1102415 cri.go:89] found id: ""
	I0717 20:03:25.553913 1102415 logs.go:284] 0 containers: []
	W0717 20:03:25.553925 1102415 logs.go:286] No container was found matching "kindnet"
	I0717 20:03:25.553932 1102415 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 20:03:25.554025 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 20:03:25.589810 1102415 cri.go:89] found id: "19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064"
	I0717 20:03:25.589844 1102415 cri.go:89] found id: "4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88"
	I0717 20:03:25.589851 1102415 cri.go:89] found id: ""
	I0717 20:03:25.589862 1102415 logs.go:284] 2 containers: [19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064 4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88]
	I0717 20:03:25.589924 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:25.594968 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:25.598953 1102415 logs.go:123] Gathering logs for kube-scheduler [9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261] ...
	I0717 20:03:25.598977 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261"
	I0717 20:03:25.640632 1102415 logs.go:123] Gathering logs for kube-controller-manager [280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6] ...
	I0717 20:03:25.640678 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6"
	I0717 20:03:25.692768 1102415 logs.go:123] Gathering logs for storage-provisioner [19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064] ...
	I0717 20:03:25.692812 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064"
	I0717 20:03:25.728461 1102415 logs.go:123] Gathering logs for container status ...
	I0717 20:03:25.728500 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 20:03:25.779239 1102415 logs.go:123] Gathering logs for dmesg ...
	I0717 20:03:25.779278 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 20:03:25.794738 1102415 logs.go:123] Gathering logs for describe nodes ...
	I0717 20:03:25.794790 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 20:03:25.966972 1102415 logs.go:123] Gathering logs for etcd [bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2] ...
	I0717 20:03:25.967016 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2"
	I0717 20:03:26.017430 1102415 logs.go:123] Gathering logs for coredns [cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524] ...
	I0717 20:03:26.017467 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524"
	I0717 20:03:26.053983 1102415 logs.go:123] Gathering logs for kube-proxy [76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951] ...
	I0717 20:03:26.054017 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951"
	I0717 20:03:26.092510 1102415 logs.go:123] Gathering logs for storage-provisioner [4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88] ...
	I0717 20:03:26.092544 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88"
	I0717 20:03:26.127038 1102415 logs.go:123] Gathering logs for CRI-O ...
	I0717 20:03:26.127071 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 20:03:26.728858 1102415 logs.go:123] Gathering logs for kubelet ...
	I0717 20:03:26.728911 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 20:03:26.792099 1102415 logs.go:123] Gathering logs for kube-apiserver [210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a] ...
	I0717 20:03:26.792146 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a"
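The "listing CRI containers" / "Gathering logs" sequence above follows a simple pattern: ask crictl for container IDs matching each component name, then tail the last 400 lines of every container found, warning when (as with kindnet) nothing matches. A standalone sketch of that pattern, assuming crictl is installed on the node and root access as in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// gatherComponentLogs mirrors the collection sequence in the log above:
// `crictl ps -a --quiet --name=<component>` to find container IDs, then
// `crictl logs --tail 400 <id>` for each ID that came back.
func gatherComponentLogs(component string) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		fmt.Printf("listing %s containers failed: %v\n", component, err)
		return
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Printf("No container was found matching %q\n", component)
		return
	}
	for _, id := range ids {
		logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			fmt.Printf("logs for %s failed: %v\n", id, err)
			continue
		}
		fmt.Printf("==> %s (%s) <==\n%s\n", component, id, logs)
	}
}

func main() {
	for _, c := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	} {
		gatherComponentLogs(c)
	}
}

In the report itself the same commands are driven over SSH, which is why each one surfaces as an ssh_runner.go "Run:" line.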
	I0717 20:03:29.360633 1102415 system_pods.go:59] 8 kube-system pods found
	I0717 20:03:29.360678 1102415 system_pods.go:61] "coredns-5d78c9869d-rjqsv" [f27e2de9-9849-40e2-b6dc-1ee27537b1e6] Running
	I0717 20:03:29.360686 1102415 system_pods.go:61] "etcd-default-k8s-diff-port-711413" [15f74e3f-61a1-4464-bbff-d336f6df4b6e] Running
	I0717 20:03:29.360694 1102415 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-711413" [c164ab32-6a1d-4079-9b58-7da96eabc60e] Running
	I0717 20:03:29.360701 1102415 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-711413" [7dc149c8-be03-4fd6-b945-c63e95a28470] Running
	I0717 20:03:29.360708 1102415 system_pods.go:61] "kube-proxy-9qfpg" [ecb84bb9-57a2-4a42-8104-b792d38479ca] Running
	I0717 20:03:29.360714 1102415 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-711413" [f2cb374f-674e-4a08-82e0-0932b732b485] Running
	I0717 20:03:29.360727 1102415 system_pods.go:61] "metrics-server-74d5c6b9c-hzcd7" [17e01399-9910-4f01-abe7-3eae271af1db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:03:29.360745 1102415 system_pods.go:61] "storage-provisioner" [43705714-97f1-4b06-8eeb-04d60c22112a] Running
	I0717 20:03:29.360755 1102415 system_pods.go:74] duration metric: took 4.108627852s to wait for pod list to return data ...
	I0717 20:03:29.360764 1102415 default_sa.go:34] waiting for default service account to be created ...
	I0717 20:03:29.364887 1102415 default_sa.go:45] found service account: "default"
	I0717 20:03:29.364918 1102415 default_sa.go:55] duration metric: took 4.142278ms for default service account to be created ...
	I0717 20:03:29.364927 1102415 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 20:03:29.372734 1102415 system_pods.go:86] 8 kube-system pods found
	I0717 20:03:29.372774 1102415 system_pods.go:89] "coredns-5d78c9869d-rjqsv" [f27e2de9-9849-40e2-b6dc-1ee27537b1e6] Running
	I0717 20:03:29.372783 1102415 system_pods.go:89] "etcd-default-k8s-diff-port-711413" [15f74e3f-61a1-4464-bbff-d336f6df4b6e] Running
	I0717 20:03:29.372791 1102415 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-711413" [c164ab32-6a1d-4079-9b58-7da96eabc60e] Running
	I0717 20:03:29.372799 1102415 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-711413" [7dc149c8-be03-4fd6-b945-c63e95a28470] Running
	I0717 20:03:29.372806 1102415 system_pods.go:89] "kube-proxy-9qfpg" [ecb84bb9-57a2-4a42-8104-b792d38479ca] Running
	I0717 20:03:29.372813 1102415 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-711413" [f2cb374f-674e-4a08-82e0-0932b732b485] Running
	I0717 20:03:29.372824 1102415 system_pods.go:89] "metrics-server-74d5c6b9c-hzcd7" [17e01399-9910-4f01-abe7-3eae271af1db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:03:29.372832 1102415 system_pods.go:89] "storage-provisioner" [43705714-97f1-4b06-8eeb-04d60c22112a] Running
	I0717 20:03:29.372843 1102415 system_pods.go:126] duration metric: took 7.908204ms to wait for k8s-apps to be running ...
	I0717 20:03:29.372857 1102415 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 20:03:29.372916 1102415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:03:29.393783 1102415 system_svc.go:56] duration metric: took 20.914205ms WaitForService to wait for kubelet.
	I0717 20:03:29.393821 1102415 kubeadm.go:581] duration metric: took 4m21.365424408s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 20:03:29.393853 1102415 node_conditions.go:102] verifying NodePressure condition ...
	I0717 20:03:29.398018 1102415 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 20:03:29.398052 1102415 node_conditions.go:123] node cpu capacity is 2
	I0717 20:03:29.398064 1102415 node_conditions.go:105] duration metric: took 4.205596ms to run NodePressure ...
	I0717 20:03:29.398076 1102415 start.go:228] waiting for startup goroutines ...
	I0717 20:03:29.398082 1102415 start.go:233] waiting for cluster config update ...
	I0717 20:03:29.398102 1102415 start.go:242] writing updated cluster config ...
	I0717 20:03:29.398468 1102415 ssh_runner.go:195] Run: rm -f paused
	I0717 20:03:29.454497 1102415 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 20:03:29.457512 1102415 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-711413" cluster and "default" namespace by default
	I0717 20:03:27.959261 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:30.460004 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:28.394465 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:30.892361 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:32.957801 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:34.958305 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:32.892903 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:35.392748 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:36.958526 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:38.958779 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:37.393705 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:39.892551 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:41.458525 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:42.402712 1103141 pod_ready.go:81] duration metric: took 4m0.00015085s waiting for pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace to be "Ready" ...
	E0717 20:03:42.402748 1103141 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 20:03:42.402774 1103141 pod_ready.go:38] duration metric: took 4m10.235484044s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 20:03:42.402819 1103141 kubeadm.go:640] restartCluster took 4m30.682189828s
	W0717 20:03:42.402887 1103141 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0717 20:03:42.402946 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 20:03:42.393799 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:44.394199 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:46.892897 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:48.895295 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:51.394267 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:53.894027 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:56.393652 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:58.896895 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:01.393396 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:03.892923 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:05.894423 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:08.394591 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:10.893136 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:14.851948 1103141 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.44897498s)
	I0717 20:04:14.852044 1103141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:04:14.868887 1103141 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 20:04:14.879707 1103141 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 20:04:14.890657 1103141 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 20:04:14.890724 1103141 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 20:04:14.961576 1103141 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0717 20:04:14.961661 1103141 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 20:04:15.128684 1103141 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 20:04:15.128835 1103141 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 20:04:15.128966 1103141 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 20:04:15.334042 1103141 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 20:04:15.336736 1103141 out.go:204]   - Generating certificates and keys ...
	I0717 20:04:15.336885 1103141 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 20:04:15.336966 1103141 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 20:04:15.337097 1103141 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 20:04:15.337201 1103141 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0717 20:04:15.337312 1103141 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 20:04:15.337393 1103141 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0717 20:04:15.337769 1103141 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0717 20:04:15.338490 1103141 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0717 20:04:15.338931 1103141 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 20:04:15.339490 1103141 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 20:04:15.339994 1103141 kubeadm.go:322] [certs] Using the existing "sa" key
	I0717 20:04:15.340076 1103141 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 20:04:15.714920 1103141 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 20:04:15.892169 1103141 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 20:04:16.203610 1103141 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 20:04:16.346085 1103141 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 20:04:16.364315 1103141 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 20:04:16.365521 1103141 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 20:04:16.366077 1103141 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 20:04:16.503053 1103141 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 20:04:13.393067 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:15.394199 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:16.505772 1103141 out.go:204]   - Booting up control plane ...
	I0717 20:04:16.505925 1103141 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 20:04:16.506056 1103141 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 20:04:16.511321 1103141 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 20:04:16.513220 1103141 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 20:04:16.516069 1103141 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 20:04:17.892626 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:19.893760 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:25.520496 1103141 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.003077 seconds
	I0717 20:04:25.520676 1103141 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 20:04:25.541790 1103141 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 20:04:26.093172 1103141 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 20:04:26.093446 1103141 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-114855 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 20:04:26.614680 1103141 kubeadm.go:322] [bootstrap-token] Using token: nbkipc.s1xu11jkn2pd9jvz
	I0717 20:04:22.393296 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:24.395001 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:26.617034 1103141 out.go:204]   - Configuring RBAC rules ...
	I0717 20:04:26.617210 1103141 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 20:04:26.625795 1103141 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 20:04:26.645311 1103141 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 20:04:26.650977 1103141 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 20:04:26.656523 1103141 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 20:04:26.662996 1103141 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 20:04:26.691726 1103141 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 20:04:26.969700 1103141 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 20:04:27.038459 1103141 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 20:04:27.039601 1103141 kubeadm.go:322] 
	I0717 20:04:27.039723 1103141 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 20:04:27.039753 1103141 kubeadm.go:322] 
	I0717 20:04:27.039848 1103141 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 20:04:27.039857 1103141 kubeadm.go:322] 
	I0717 20:04:27.039879 1103141 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 20:04:27.039945 1103141 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 20:04:27.040023 1103141 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 20:04:27.040036 1103141 kubeadm.go:322] 
	I0717 20:04:27.040114 1103141 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0717 20:04:27.040123 1103141 kubeadm.go:322] 
	I0717 20:04:27.040192 1103141 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 20:04:27.040202 1103141 kubeadm.go:322] 
	I0717 20:04:27.040302 1103141 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 20:04:27.040419 1103141 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 20:04:27.040533 1103141 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 20:04:27.040543 1103141 kubeadm.go:322] 
	I0717 20:04:27.040653 1103141 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 20:04:27.040780 1103141 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 20:04:27.040792 1103141 kubeadm.go:322] 
	I0717 20:04:27.040917 1103141 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token nbkipc.s1xu11jkn2pd9jvz \
	I0717 20:04:27.041051 1103141 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a35378d49f60cb3201ada96dd7ea3b95e7cce0b7f53bd002f18d31a55869c8fc \
	I0717 20:04:27.041083 1103141 kubeadm.go:322] 	--control-plane 
	I0717 20:04:27.041093 1103141 kubeadm.go:322] 
	I0717 20:04:27.041196 1103141 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 20:04:27.041200 1103141 kubeadm.go:322] 
	I0717 20:04:27.041276 1103141 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token nbkipc.s1xu11jkn2pd9jvz \
	I0717 20:04:27.041420 1103141 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a35378d49f60cb3201ada96dd7ea3b95e7cce0b7f53bd002f18d31a55869c8fc 
	I0717 20:04:27.042440 1103141 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 20:04:27.042466 1103141 cni.go:84] Creating CNI manager for ""
	I0717 20:04:27.042512 1103141 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 20:04:27.046805 1103141 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 20:04:27.049084 1103141 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 20:04:27.115952 1103141 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 20:04:27.155521 1103141 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 20:04:27.155614 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:27.155620 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5 minikube.k8s.io/name=embed-certs-114855 minikube.k8s.io/updated_at=2023_07_17T20_04_27_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:27.604520 1103141 ops.go:34] apiserver oom_adj: -16
	I0717 20:04:27.604687 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:28.204384 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:28.703799 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:29.203981 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:29.703475 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:30.204062 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:30.703323 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:26.892819 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:28.895201 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:31.393384 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:31.204070 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:31.704206 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:32.204069 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:32.704193 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:33.203936 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:33.703692 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:34.203584 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:34.704039 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:35.204118 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:35.703385 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:33.893262 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:33.985163 1101908 pod_ready.go:81] duration metric: took 4m0.000422638s waiting for pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace to be "Ready" ...
	E0717 20:04:33.985205 1101908 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 20:04:33.985241 1101908 pod_ready.go:38] duration metric: took 4m1.200649003s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 20:04:33.985298 1101908 kubeadm.go:640] restartCluster took 4m55.488257482s
	W0717 20:04:33.985385 1101908 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0717 20:04:33.985432 1101908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 20:04:36.203827 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:36.703377 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:37.203981 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:37.703376 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:38.203498 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:38.703751 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:39.204099 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:39.704172 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:39.830734 1103141 kubeadm.go:1081] duration metric: took 12.675193605s to wait for elevateKubeSystemPrivileges.
	I0717 20:04:39.830771 1103141 kubeadm.go:406] StartCluster complete in 5m28.184955104s
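The long run of identical "kubectl get sa default" lines above is a post-init polling loop: after kubeadm init, minikube retries roughly every half second until the "default" service account appears, then records the elapsed time (12.675s here) as elevateKubeSystemPrivileges. A compact sketch of such a loop; the timeout value is an assumption, while the kubeconfig path is the one the log uses on the node (where it is read via sudo).

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount polls `kubectl get sa default` until it
// succeeds or the timeout expires, mirroring the retry loop in the log.
func waitForDefaultServiceAccount(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // the log shows roughly half-second retries
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	// Path as used in the log; an illustrative two-minute cap on the wait.
	if err := waitForDefaultServiceAccount("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}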
	I0717 20:04:39.830796 1103141 settings.go:142] acquiring lock: {Name:mk3323296c186763db6ebb5a1c5ae94a6b1a6242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:04:39.830918 1103141 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 20:04:39.833157 1103141 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/kubeconfig: {Name:mk7f4fe48169c87be980c7edd9dbe55d4ea8b9ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:04:39.834602 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 20:04:39.834801 1103141 config.go:182] Loaded profile config "embed-certs-114855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 20:04:39.834815 1103141 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 20:04:39.835031 1103141 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-114855"
	I0717 20:04:39.835054 1103141 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-114855"
	W0717 20:04:39.835062 1103141 addons.go:240] addon storage-provisioner should already be in state true
	I0717 20:04:39.835120 1103141 host.go:66] Checking if "embed-certs-114855" exists ...
	I0717 20:04:39.835243 1103141 addons.go:69] Setting default-storageclass=true in profile "embed-certs-114855"
	I0717 20:04:39.835240 1103141 addons.go:69] Setting metrics-server=true in profile "embed-certs-114855"
	I0717 20:04:39.835265 1103141 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-114855"
	I0717 20:04:39.835268 1103141 addons.go:231] Setting addon metrics-server=true in "embed-certs-114855"
	W0717 20:04:39.835277 1103141 addons.go:240] addon metrics-server should already be in state true
	I0717 20:04:39.835324 1103141 host.go:66] Checking if "embed-certs-114855" exists ...
	I0717 20:04:39.835732 1103141 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:04:39.835742 1103141 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:04:39.835801 1103141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:04:39.835831 1103141 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:04:39.835799 1103141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:04:39.835916 1103141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:04:39.855470 1103141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35781
	I0717 20:04:39.855482 1103141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35595
	I0717 20:04:39.855481 1103141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35201
	I0717 20:04:39.856035 1103141 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:04:39.856107 1103141 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:04:39.856127 1103141 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:04:39.856776 1103141 main.go:141] libmachine: Using API Version  1
	I0717 20:04:39.856802 1103141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:04:39.856872 1103141 main.go:141] libmachine: Using API Version  1
	I0717 20:04:39.856886 1103141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:04:39.856937 1103141 main.go:141] libmachine: Using API Version  1
	I0717 20:04:39.856967 1103141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:04:39.857216 1103141 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:04:39.857328 1103141 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:04:39.857353 1103141 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:04:39.857979 1103141 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:04:39.858022 1103141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:04:39.858249 1103141 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:04:39.858296 1103141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:04:39.858559 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetState
	I0717 20:04:39.868852 1103141 addons.go:231] Setting addon default-storageclass=true in "embed-certs-114855"
	W0717 20:04:39.868889 1103141 addons.go:240] addon default-storageclass should already be in state true
	I0717 20:04:39.868930 1103141 host.go:66] Checking if "embed-certs-114855" exists ...
	I0717 20:04:39.869376 1103141 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:04:39.869426 1103141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:04:39.877028 1103141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37179
	I0717 20:04:39.877916 1103141 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:04:39.878347 1103141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44951
	I0717 20:04:39.878690 1103141 main.go:141] libmachine: Using API Version  1
	I0717 20:04:39.878713 1103141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:04:39.879085 1103141 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:04:39.879732 1103141 main.go:141] libmachine: Using API Version  1
	I0717 20:04:39.879754 1103141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:04:39.879765 1103141 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:04:39.879950 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetState
	I0717 20:04:39.880175 1103141 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:04:39.880381 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetState
	I0717 20:04:39.882729 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 20:04:39.885818 1103141 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 20:04:39.883284 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 20:04:39.888145 1103141 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 20:04:39.888171 1103141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 20:04:39.888202 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 20:04:39.891651 1103141 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 20:04:39.893769 1103141 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 20:04:39.893066 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 20:04:39.893799 1103141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 20:04:39.893831 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 20:04:39.893840 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 20:04:39.893879 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 20:04:39.894206 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 20:04:39.894454 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 20:04:39.894689 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 20:04:39.894878 1103141 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/embed-certs-114855/id_rsa Username:docker}
	I0717 20:04:39.895562 1103141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39757
	I0717 20:04:39.896172 1103141 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:04:39.896799 1103141 main.go:141] libmachine: Using API Version  1
	I0717 20:04:39.896825 1103141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:04:39.897316 1103141 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:04:39.897969 1103141 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:04:39.898007 1103141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:04:39.898778 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 20:04:39.899616 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 20:04:39.899645 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 20:04:39.899895 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 20:04:39.900193 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 20:04:39.900575 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 20:04:39.900770 1103141 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/embed-certs-114855/id_rsa Username:docker}
	I0717 20:04:39.915966 1103141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43429
	I0717 20:04:39.916539 1103141 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:04:39.917101 1103141 main.go:141] libmachine: Using API Version  1
	I0717 20:04:39.917123 1103141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:04:39.917530 1103141 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:04:39.917816 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetState
	I0717 20:04:39.919631 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 20:04:39.919916 1103141 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 20:04:39.919936 1103141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 20:04:39.919957 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 20:04:39.926132 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 20:04:39.926487 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 20:04:39.926520 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 20:04:39.926779 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 20:04:39.927115 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 20:04:39.927327 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 20:04:39.927522 1103141 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/embed-certs-114855/id_rsa Username:docker}
	I0717 20:04:40.077079 1103141 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 20:04:40.077106 1103141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 20:04:40.084344 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 20:04:40.114809 1103141 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 20:04:40.123795 1103141 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 20:04:40.149950 1103141 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 20:04:40.149977 1103141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 20:04:40.222818 1103141 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 20:04:40.222855 1103141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 20:04:40.290773 1103141 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 20:04:40.464132 1103141 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-114855" context rescaled to 1 replicas
	I0717 20:04:40.464182 1103141 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 20:04:40.468285 1103141 out.go:177] * Verifying Kubernetes components...
	I0717 20:04:40.470824 1103141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:04:42.565704 1103141 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.481305344s)
	I0717 20:04:42.565749 1103141 start.go:917] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
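For readers decoding the sed pipeline whose completion is reported above: it edits the coredns ConfigMap so the Corefile gains a hosts block (plus a log directive) ahead of the forward-to-/etc/resolv.conf rule, which is how host.minikube.internal is made to resolve to 192.168.39.1. The injected fragment is reproduced below as a Go constant purely for legibility; the surrounding Corefile is not captured in this log.

package main

import "fmt"

// hostsBlock is the fragment the sed pipeline splices into the CoreDNS
// Corefile just before the `forward . /etc/resolv.conf` line, so that
// host.minikube.internal resolves to the host-only network gateway.
const hostsBlock = `        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }`

func main() { fmt.Println(hostsBlock) }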
	I0717 20:04:43.290667 1103141 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.175803142s)
	I0717 20:04:43.290744 1103141 main.go:141] libmachine: Making call to close driver server
	I0717 20:04:43.290759 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .Close
	I0717 20:04:43.290778 1103141 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.166947219s)
	I0717 20:04:43.290822 1103141 main.go:141] libmachine: Making call to close driver server
	I0717 20:04:43.290840 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .Close
	I0717 20:04:43.291087 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | Closing plugin on server side
	I0717 20:04:43.291217 1103141 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:04:43.291225 1103141 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:04:43.291238 1103141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:04:43.291241 1103141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:04:43.291254 1103141 main.go:141] libmachine: Making call to close driver server
	I0717 20:04:43.291261 1103141 main.go:141] libmachine: Making call to close driver server
	I0717 20:04:43.291268 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .Close
	I0717 20:04:43.291272 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .Close
	I0717 20:04:43.291613 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | Closing plugin on server side
	I0717 20:04:43.291662 1103141 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:04:43.291671 1103141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:04:43.291732 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | Closing plugin on server side
	I0717 20:04:43.291756 1103141 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:04:43.291764 1103141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:04:43.291775 1103141 main.go:141] libmachine: Making call to close driver server
	I0717 20:04:43.291784 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .Close
	I0717 20:04:43.292436 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | Closing plugin on server side
	I0717 20:04:43.292456 1103141 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:04:43.292471 1103141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:04:43.439222 1103141 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.148389848s)
	I0717 20:04:43.439268 1103141 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.968393184s)
	I0717 20:04:43.439310 1103141 node_ready.go:35] waiting up to 6m0s for node "embed-certs-114855" to be "Ready" ...
	I0717 20:04:43.439357 1103141 main.go:141] libmachine: Making call to close driver server
	I0717 20:04:43.439401 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .Close
	I0717 20:04:43.439784 1103141 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:04:43.439806 1103141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:04:43.439863 1103141 main.go:141] libmachine: Making call to close driver server
	I0717 20:04:43.439932 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .Close
	I0717 20:04:43.440202 1103141 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:04:43.440220 1103141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:04:43.440226 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | Closing plugin on server side
	I0717 20:04:43.440232 1103141 addons.go:467] Verifying addon metrics-server=true in "embed-certs-114855"
	I0717 20:04:43.443066 1103141 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 20:04:43.445240 1103141 addons.go:502] enable addons completed in 3.610419127s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 20:04:43.494952 1103141 node_ready.go:49] node "embed-certs-114855" has status "Ready":"True"
	I0717 20:04:43.495002 1103141 node_ready.go:38] duration metric: took 55.676022ms waiting for node "embed-certs-114855" to be "Ready" ...
	I0717 20:04:43.495017 1103141 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 20:04:43.579632 1103141 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-9dkzb" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:44.676633 1103141 pod_ready.go:92] pod "coredns-5d78c9869d-9dkzb" in "kube-system" namespace has status "Ready":"True"
	I0717 20:04:44.676664 1103141 pod_ready.go:81] duration metric: took 1.096981736s waiting for pod "coredns-5d78c9869d-9dkzb" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:44.676677 1103141 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-gq2b2" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:44.683019 1103141 pod_ready.go:92] pod "coredns-5d78c9869d-gq2b2" in "kube-system" namespace has status "Ready":"True"
	I0717 20:04:44.683061 1103141 pod_ready.go:81] duration metric: took 6.376086ms waiting for pod "coredns-5d78c9869d-gq2b2" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:44.683077 1103141 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:44.691140 1103141 pod_ready.go:92] pod "etcd-embed-certs-114855" in "kube-system" namespace has status "Ready":"True"
	I0717 20:04:44.691166 1103141 pod_ready.go:81] duration metric: took 8.082867ms waiting for pod "etcd-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:44.691180 1103141 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:44.713413 1103141 pod_ready.go:92] pod "kube-apiserver-embed-certs-114855" in "kube-system" namespace has status "Ready":"True"
	I0717 20:04:44.713448 1103141 pod_ready.go:81] duration metric: took 22.261351ms waiting for pod "kube-apiserver-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:44.713462 1103141 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:44.728761 1103141 pod_ready.go:92] pod "kube-controller-manager-embed-certs-114855" in "kube-system" namespace has status "Ready":"True"
	I0717 20:04:44.728797 1103141 pod_ready.go:81] duration metric: took 15.326363ms waiting for pod "kube-controller-manager-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:44.728813 1103141 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bfvnl" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:45.583863 1103141 pod_ready.go:92] pod "kube-proxy-bfvnl" in "kube-system" namespace has status "Ready":"True"
	I0717 20:04:45.583901 1103141 pod_ready.go:81] duration metric: took 855.078548ms waiting for pod "kube-proxy-bfvnl" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:45.583915 1103141 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:45.867684 1103141 pod_ready.go:92] pod "kube-scheduler-embed-certs-114855" in "kube-system" namespace has status "Ready":"True"
	I0717 20:04:45.867719 1103141 pod_ready.go:81] duration metric: took 283.796193ms waiting for pod "kube-scheduler-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
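Each pod_ready wait above is roughly equivalent to a kubectl wait call against the labels listed at the start of the loop (k8s-app=kube-dns, component=etcd, component=kube-apiserver, and so on). A sketch of reproducing two of them by hand, again assuming the context name matches the profile:

    kubectl --context embed-certs-114855 -n kube-system wait \
      --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
    kubectl --context embed-certs-114855 -n kube-system wait \
      --for=condition=Ready pod -l component=kube-apiserver --timeout=6m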
	I0717 20:04:45.867735 1103141 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:48.274479 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:50.278380 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:52.775046 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:54.775545 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:56.776685 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:59.275966 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:57.110722 1101908 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (23.125251743s)
	I0717 20:04:57.110813 1101908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:04:57.124991 1101908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 20:04:57.136828 1101908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 20:04:57.146898 1101908 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 20:04:57.146965 1101908 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0717 20:04:57.390116 1101908 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 20:05:01.281623 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:03.776009 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:10.335351 1101908 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0717 20:05:10.335447 1101908 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 20:05:10.335566 1101908 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 20:05:10.335703 1101908 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 20:05:10.335829 1101908 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 20:05:10.335949 1101908 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 20:05:10.336064 1101908 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 20:05:10.336135 1101908 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0717 20:05:10.336220 1101908 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 20:05:10.338257 1101908 out.go:204]   - Generating certificates and keys ...
	I0717 20:05:10.338354 1101908 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 20:05:10.338443 1101908 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 20:05:10.338558 1101908 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 20:05:10.338681 1101908 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0717 20:05:10.338792 1101908 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 20:05:10.338855 1101908 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0717 20:05:10.338950 1101908 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0717 20:05:10.339044 1101908 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0717 20:05:10.339160 1101908 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 20:05:10.339264 1101908 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 20:05:10.339326 1101908 kubeadm.go:322] [certs] Using the existing "sa" key
	I0717 20:05:10.339403 1101908 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 20:05:10.339477 1101908 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 20:05:10.339556 1101908 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 20:05:10.339650 1101908 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 20:05:10.339727 1101908 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 20:05:10.339820 1101908 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 20:05:10.341550 1101908 out.go:204]   - Booting up control plane ...
	I0717 20:05:10.341674 1101908 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 20:05:10.341797 1101908 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 20:05:10.341892 1101908 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 20:05:10.341982 1101908 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 20:05:10.342180 1101908 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 20:05:10.342290 1101908 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.005656 seconds
	I0717 20:05:10.342399 1101908 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 20:05:10.342515 1101908 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 20:05:10.342582 1101908 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 20:05:10.342742 1101908 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-149000 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0717 20:05:10.342830 1101908 kubeadm.go:322] [bootstrap-token] Using token: ki6f1y.fknzxf03oj84iyat
	I0717 20:05:10.344845 1101908 out.go:204]   - Configuring RBAC rules ...
	I0717 20:05:10.344980 1101908 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 20:05:10.345153 1101908 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 20:05:10.345318 1101908 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 20:05:10.345473 1101908 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 20:05:10.345600 1101908 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 20:05:10.345664 1101908 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 20:05:10.345739 1101908 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 20:05:10.345750 1101908 kubeadm.go:322] 
	I0717 20:05:10.345834 1101908 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 20:05:10.345843 1101908 kubeadm.go:322] 
	I0717 20:05:10.345939 1101908 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 20:05:10.345947 1101908 kubeadm.go:322] 
	I0717 20:05:10.345983 1101908 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 20:05:10.346067 1101908 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 20:05:10.346139 1101908 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 20:05:10.346148 1101908 kubeadm.go:322] 
	I0717 20:05:10.346248 1101908 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 20:05:10.346356 1101908 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 20:05:10.346470 1101908 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 20:05:10.346480 1101908 kubeadm.go:322] 
	I0717 20:05:10.346588 1101908 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0717 20:05:10.346686 1101908 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 20:05:10.346695 1101908 kubeadm.go:322] 
	I0717 20:05:10.346821 1101908 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ki6f1y.fknzxf03oj84iyat \
	I0717 20:05:10.346997 1101908 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:a35378d49f60cb3201ada96dd7ea3b95e7cce0b7f53bd002f18d31a55869c8fc \
	I0717 20:05:10.347033 1101908 kubeadm.go:322]     --control-plane 	  
	I0717 20:05:10.347042 1101908 kubeadm.go:322] 
	I0717 20:05:10.347152 1101908 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 20:05:10.347161 1101908 kubeadm.go:322] 
	I0717 20:05:10.347260 1101908 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ki6f1y.fknzxf03oj84iyat \
	I0717 20:05:10.347429 1101908 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:a35378d49f60cb3201ada96dd7ea3b95e7cce0b7f53bd002f18d31a55869c8fc 
	I0717 20:05:10.347449 1101908 cni.go:84] Creating CNI manager for ""
	I0717 20:05:10.347463 1101908 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 20:05:10.349875 1101908 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 20:05:06.284772 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:08.777303 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:10.351592 1101908 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 20:05:10.370891 1101908 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
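The 457-byte /etc/cni/net.d/1-k8s.conflist copied above is the bridge CNI configuration the log recommends for the kvm2 + crio combination; its exact contents are not shown here. To inspect what was actually written on the node, something like the report's own ssh form can be used:

    out/minikube-linux-amd64 -p old-k8s-version-149000 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"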
	I0717 20:05:10.395381 1101908 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 20:05:10.395477 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5 minikube.k8s.io/name=old-k8s-version-149000 minikube.k8s.io/updated_at=2023_07_17T20_05_10_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:10.395473 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:10.663627 1101908 ops.go:34] apiserver oom_adj: -16
	I0717 20:05:10.663730 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:11.311991 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:11.812120 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:11.275701 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:13.277070 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:12.312047 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:12.811579 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:13.311876 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:13.811911 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:14.311514 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:14.811938 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:15.312088 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:15.812089 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:16.312164 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:16.812065 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:15.776961 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:17.778204 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:20.275642 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:17.312322 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:17.811428 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:18.312070 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:18.812245 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:19.311363 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:19.811909 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:20.311343 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:20.811869 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:21.311974 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:21.811429 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:22.311474 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:22.811809 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:23.311574 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:23.812246 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:24.312115 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:24.812132 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:25.311694 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:25.457162 1101908 kubeadm.go:1081] duration metric: took 15.061765556s to wait for elevateKubeSystemPrivileges.
	I0717 20:05:25.457213 1101908 kubeadm.go:406] StartCluster complete in 5m47.004786394s
	I0717 20:05:25.457273 1101908 settings.go:142] acquiring lock: {Name:mk3323296c186763db6ebb5a1c5ae94a6b1a6242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:05:25.457431 1101908 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 20:05:25.459593 1101908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/kubeconfig: {Name:mk7f4fe48169c87be980c7edd9dbe55d4ea8b9ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:05:25.459942 1101908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 20:05:25.460139 1101908 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 20:05:25.460267 1101908 config.go:182] Loaded profile config "old-k8s-version-149000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0717 20:05:25.460272 1101908 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-149000"
	I0717 20:05:25.460409 1101908 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-149000"
	W0717 20:05:25.460419 1101908 addons.go:240] addon storage-provisioner should already be in state true
	I0717 20:05:25.460516 1101908 host.go:66] Checking if "old-k8s-version-149000" exists ...
	I0717 20:05:25.460284 1101908 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-149000"
	I0717 20:05:25.460709 1101908 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-149000"
	W0717 20:05:25.460727 1101908 addons.go:240] addon metrics-server should already be in state true
	I0717 20:05:25.460294 1101908 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-149000"
	I0717 20:05:25.460771 1101908 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-149000"
	I0717 20:05:25.460793 1101908 host.go:66] Checking if "old-k8s-version-149000" exists ...
	I0717 20:05:25.461033 1101908 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:05:25.461061 1101908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:05:25.461100 1101908 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:05:25.461128 1101908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:05:25.461201 1101908 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:05:25.461227 1101908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:05:25.487047 1101908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45213
	I0717 20:05:25.487091 1101908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44607
	I0717 20:05:25.487066 1101908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35377
	I0717 20:05:25.487833 1101908 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:05:25.487898 1101908 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:05:25.487930 1101908 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:05:25.488571 1101908 main.go:141] libmachine: Using API Version  1
	I0717 20:05:25.488595 1101908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:05:25.488597 1101908 main.go:141] libmachine: Using API Version  1
	I0717 20:05:25.488615 1101908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:05:25.488632 1101908 main.go:141] libmachine: Using API Version  1
	I0717 20:05:25.488660 1101908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:05:25.489058 1101908 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:05:25.489074 1101908 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:05:25.489135 1101908 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:05:25.489284 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetState
	I0717 20:05:25.489635 1101908 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:05:25.489641 1101908 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:05:25.489654 1101908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:05:25.489657 1101908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:05:25.498029 1101908 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-149000"
	W0717 20:05:25.498058 1101908 addons.go:240] addon default-storageclass should already be in state true
	I0717 20:05:25.498092 1101908 host.go:66] Checking if "old-k8s-version-149000" exists ...
	I0717 20:05:25.498485 1101908 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:05:25.498527 1101908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:05:25.506931 1101908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33279
	I0717 20:05:25.507478 1101908 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:05:25.508080 1101908 main.go:141] libmachine: Using API Version  1
	I0717 20:05:25.508109 1101908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:05:25.508562 1101908 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:05:25.508845 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetState
	I0717 20:05:25.510969 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	I0717 20:05:25.513078 1101908 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 20:05:25.511340 1101908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42217
	I0717 20:05:25.515599 1101908 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 20:05:25.515626 1101908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 20:05:25.515655 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 20:05:25.516012 1101908 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:05:25.516682 1101908 main.go:141] libmachine: Using API Version  1
	I0717 20:05:25.516709 1101908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:05:25.517198 1101908 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:05:25.517438 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetState
	I0717 20:05:25.519920 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 20:05:25.520835 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	I0717 20:05:25.521176 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 20:05:25.521204 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 20:05:25.523226 1101908 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
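The metrics-server addon in this run points at fake.domain/registry.k8s.io/echoserver:1.4, an image that cannot be pulled, so the repeated Ready:"False" polls in this log are consistent with a pod stuck on an image-pull failure. A hedged way to see the events behind that status, using the pod name that appears later in this run and assuming the context keeps the profile name:

    kubectl --context old-k8s-version-149000 -n kube-system describe pod metrics-server-74d5856cc6-cxzws
    kubectl --context old-k8s-version-149000 -n kube-system get events --sort-by=.lastTimestamp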
	I0717 20:05:22.775399 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:25.278740 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:25.521305 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 20:05:25.523448 1101908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38723
	I0717 20:05:25.525260 1101908 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 20:05:25.525280 1101908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 20:05:25.525310 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 20:05:25.525529 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 20:05:25.526263 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 20:05:25.526597 1101908 sshutil.go:53] new ssh client: &{IP:192.168.50.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/old-k8s-version-149000/id_rsa Username:docker}
	I0717 20:05:25.527369 1101908 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:05:25.528329 1101908 main.go:141] libmachine: Using API Version  1
	I0717 20:05:25.528357 1101908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:05:25.528696 1101908 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:05:25.528792 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 20:05:25.529350 1101908 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:05:25.529381 1101908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:05:25.529649 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 20:05:25.529655 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 20:05:25.529674 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 20:05:25.529823 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 20:05:25.529949 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 20:05:25.530088 1101908 sshutil.go:53] new ssh client: &{IP:192.168.50.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/old-k8s-version-149000/id_rsa Username:docker}
	I0717 20:05:25.552954 1101908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33783
	I0717 20:05:25.553470 1101908 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:05:25.554117 1101908 main.go:141] libmachine: Using API Version  1
	I0717 20:05:25.554145 1101908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:05:25.554521 1101908 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:05:25.554831 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetState
	I0717 20:05:25.556872 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	I0717 20:05:25.557158 1101908 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 20:05:25.557183 1101908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 20:05:25.557204 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 20:05:25.560114 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 20:05:25.560622 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 20:05:25.560656 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 20:05:25.561095 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 20:05:25.561350 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 20:05:25.561512 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 20:05:25.561749 1101908 sshutil.go:53] new ssh client: &{IP:192.168.50.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/old-k8s-version-149000/id_rsa Username:docker}
	I0717 20:05:25.724163 1101908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 20:05:25.749198 1101908 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 20:05:25.749231 1101908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 20:05:25.754533 1101908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 20:05:25.757518 1101908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 20:05:25.811831 1101908 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 20:05:25.811867 1101908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 20:05:25.893143 1101908 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 20:05:25.893175 1101908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 20:05:25.994781 1101908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 20:05:26.019864 1101908 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-149000" context rescaled to 1 replicas
	I0717 20:05:26.019914 1101908 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.177 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 20:05:26.022777 1101908 out.go:177] * Verifying Kubernetes components...
	I0717 20:05:26.025694 1101908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:05:27.100226 1101908 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.376005593s)
	I0717 20:05:27.100282 1101908 main.go:141] libmachine: Making call to close driver server
	I0717 20:05:27.100295 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .Close
	I0717 20:05:27.100306 1101908 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.345727442s)
	I0717 20:05:27.100343 1101908 start.go:917] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0717 20:05:27.100360 1101908 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.342808508s)
	I0717 20:05:27.100411 1101908 main.go:141] libmachine: Making call to close driver server
	I0717 20:05:27.100426 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .Close
	I0717 20:05:27.100781 1101908 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:05:27.100799 1101908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:05:27.100810 1101908 main.go:141] libmachine: Making call to close driver server
	I0717 20:05:27.100821 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .Close
	I0717 20:05:27.100866 1101908 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:05:27.100877 1101908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:05:27.100876 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | Closing plugin on server side
	I0717 20:05:27.100885 1101908 main.go:141] libmachine: Making call to close driver server
	I0717 20:05:27.100894 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .Close
	I0717 20:05:27.101035 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | Closing plugin on server side
	I0717 20:05:27.101065 1101908 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:05:27.101100 1101908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:05:27.101154 1101908 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:05:27.101170 1101908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:05:27.101185 1101908 main.go:141] libmachine: Making call to close driver server
	I0717 20:05:27.101195 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .Close
	I0717 20:05:27.101423 1101908 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:05:27.101441 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | Closing plugin on server side
	I0717 20:05:27.101448 1101908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:05:27.169038 1101908 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.143298277s)
	I0717 20:05:27.169095 1101908 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-149000" to be "Ready" ...
	I0717 20:05:27.169044 1101908 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.174211865s)
	I0717 20:05:27.169278 1101908 main.go:141] libmachine: Making call to close driver server
	I0717 20:05:27.169333 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .Close
	I0717 20:05:27.169672 1101908 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:05:27.169782 1101908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:05:27.169814 1101908 main.go:141] libmachine: Making call to close driver server
	I0717 20:05:27.169837 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .Close
	I0717 20:05:27.169758 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | Closing plugin on server side
	I0717 20:05:27.171950 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | Closing plugin on server side
	I0717 20:05:27.171960 1101908 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:05:27.171979 1101908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:05:27.171992 1101908 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-149000"
	I0717 20:05:27.174411 1101908 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 20:05:27.777543 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:30.276174 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:27.176695 1101908 addons.go:502] enable addons completed in 1.716545434s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 20:05:27.191392 1101908 node_ready.go:49] node "old-k8s-version-149000" has status "Ready":"True"
	I0717 20:05:27.191435 1101908 node_ready.go:38] duration metric: took 22.324367ms waiting for node "old-k8s-version-149000" to be "Ready" ...
	I0717 20:05:27.191450 1101908 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 20:05:27.203011 1101908 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-ldwkf" in "kube-system" namespace to be "Ready" ...
	I0717 20:05:29.214694 1101908 pod_ready.go:102] pod "coredns-5644d7b6d9-ldwkf" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:31.215215 1101908 pod_ready.go:92] pod "coredns-5644d7b6d9-ldwkf" in "kube-system" namespace has status "Ready":"True"
	I0717 20:05:31.215244 1101908 pod_ready.go:81] duration metric: took 4.012199031s waiting for pod "coredns-5644d7b6d9-ldwkf" in "kube-system" namespace to be "Ready" ...
	I0717 20:05:31.215265 1101908 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-t4mmh" in "kube-system" namespace to be "Ready" ...
	I0717 20:05:31.222461 1101908 pod_ready.go:92] pod "kube-proxy-t4mmh" in "kube-system" namespace has status "Ready":"True"
	I0717 20:05:31.222489 1101908 pod_ready.go:81] duration metric: took 7.215944ms waiting for pod "kube-proxy-t4mmh" in "kube-system" namespace to be "Ready" ...
	I0717 20:05:31.222504 1101908 pod_ready.go:38] duration metric: took 4.031041761s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 20:05:31.222530 1101908 api_server.go:52] waiting for apiserver process to appear ...
	I0717 20:05:31.222606 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:05:31.239450 1101908 api_server.go:72] duration metric: took 5.21948786s to wait for apiserver process to appear ...
	I0717 20:05:31.239494 1101908 api_server.go:88] waiting for apiserver healthz status ...
	I0717 20:05:31.239520 1101908 api_server.go:253] Checking apiserver healthz at https://192.168.50.177:8443/healthz ...
	I0717 20:05:31.247985 1101908 api_server.go:279] https://192.168.50.177:8443/healthz returned 200:
	ok
	I0717 20:05:31.249351 1101908 api_server.go:141] control plane version: v1.16.0
	I0717 20:05:31.249383 1101908 api_server.go:131] duration metric: took 9.880729ms to wait for apiserver health ...
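The healthz probe recorded above can be repeated from the test host against the same endpoint; a minimal sketch (with -k to skip certificate verification, and noting that some RBAC configurations may require client credentials for this path):

    curl -k https://192.168.50.177:8443/healthz
    # expected body on success, as logged above: ok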
	I0717 20:05:31.249391 1101908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 20:05:31.255025 1101908 system_pods.go:59] 4 kube-system pods found
	I0717 20:05:31.255062 1101908 system_pods.go:61] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:31.255069 1101908 system_pods.go:61] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:31.255076 1101908 system_pods.go:61] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:31.255086 1101908 system_pods.go:61] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:31.255095 1101908 system_pods.go:74] duration metric: took 5.697473ms to wait for pod list to return data ...
	I0717 20:05:31.255106 1101908 default_sa.go:34] waiting for default service account to be created ...
	I0717 20:05:31.259740 1101908 default_sa.go:45] found service account: "default"
	I0717 20:05:31.259772 1101908 default_sa.go:55] duration metric: took 4.660789ms for default service account to be created ...
	I0717 20:05:31.259780 1101908 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 20:05:31.264000 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:31.264044 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:31.264051 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:31.264081 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:31.264093 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:31.264116 1101908 retry.go:31] will retry after 269.941707ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:31.540816 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:31.540865 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:31.540876 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:31.540891 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:31.540922 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:31.540951 1101908 retry.go:31] will retry after 335.890023ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:32.287639 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:34.776299 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:31.881678 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:31.881721 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:31.881731 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:31.881742 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:31.881754 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:31.881778 1101908 retry.go:31] will retry after 452.6849ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:32.340889 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:32.340919 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:32.340924 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:32.340931 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:32.340938 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:32.340954 1101908 retry.go:31] will retry after 433.94285ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:32.780743 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:32.780777 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:32.780784 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:32.780795 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:32.780808 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:32.780830 1101908 retry.go:31] will retry after 664.997213ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:33.450870 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:33.450901 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:33.450906 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:33.450912 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:33.450919 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:33.450936 1101908 retry.go:31] will retry after 669.043592ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:34.126116 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:34.126155 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:34.126164 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:34.126177 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:34.126187 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:34.126207 1101908 retry.go:31] will retry after 799.422303ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:34.930555 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:34.930595 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:34.930604 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:34.930614 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:34.930624 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:34.930648 1101908 retry.go:31] will retry after 1.329879988s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:36.266531 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:36.266570 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:36.266578 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:36.266586 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:36.266596 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:36.266616 1101908 retry.go:31] will retry after 1.667039225s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:37.275872 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:39.776283 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:37.940699 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:37.940736 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:37.940746 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:37.940756 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:37.940768 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:37.940793 1101908 retry.go:31] will retry after 1.426011935s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:39.371704 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:39.371738 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:39.371743 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:39.371750 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:39.371757 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:39.371775 1101908 retry.go:31] will retry after 2.864830097s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:42.276143 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:44.775621 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:42.241652 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:42.241693 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:42.241701 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:42.241713 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:42.241723 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:42.241744 1101908 retry.go:31] will retry after 2.785860959s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:45.034761 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:45.034793 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:45.034798 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:45.034806 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:45.034818 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:45.034839 1101908 retry.go:31] will retry after 3.037872313s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:46.776795 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:49.276343 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:48.078790 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:48.078826 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:48.078831 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:48.078842 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:48.078849 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:48.078867 1101908 retry.go:31] will retry after 4.546196458s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:51.777942 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:54.274279 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:52.631941 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:52.631986 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:52.631995 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:52.632006 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:52.632017 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:52.632043 1101908 retry.go:31] will retry after 6.391777088s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:56.276359 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:58.277520 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:59.036918 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:59.036951 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:59.036956 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:59.036963 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:59.036970 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:59.036988 1101908 retry.go:31] will retry after 5.758521304s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:06:00.776149 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:03.276291 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:05.276530 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:04.801914 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:06:04.801944 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:06:04.801950 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:06:04.801958 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:06:04.801965 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:06:04.801982 1101908 retry.go:31] will retry after 7.046104479s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:06:07.777447 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:10.275741 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:12.776577 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:14.776717 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:11.856116 1101908 system_pods.go:86] 8 kube-system pods found
	I0717 20:06:11.856165 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:06:11.856175 1101908 system_pods.go:89] "etcd-old-k8s-version-149000" [702c8e9f-d99a-4766-af97-550dc956f093] Pending
	I0717 20:06:11.856183 1101908 system_pods.go:89] "kube-apiserver-old-k8s-version-149000" [0f0c9817-f4c9-4266-b576-c270cea11b4b] Pending
	I0717 20:06:11.856191 1101908 system_pods.go:89] "kube-controller-manager-old-k8s-version-149000" [539db0c4-6e8c-42eb-9b73-686de5f6c7bf] Running
	I0717 20:06:11.856207 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:06:11.856216 1101908 system_pods.go:89] "kube-scheduler-old-k8s-version-149000" [5a27a0f7-c6c9-4324-a51c-d33c205d8724] Running
	I0717 20:06:11.856295 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:06:11.856308 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:06:11.856336 1101908 retry.go:31] will retry after 13.224383762s: missing components: etcd, kube-apiserver
	I0717 20:06:16.779816 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:19.275840 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:25.091227 1101908 system_pods.go:86] 8 kube-system pods found
	I0717 20:06:25.091272 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:06:25.091281 1101908 system_pods.go:89] "etcd-old-k8s-version-149000" [702c8e9f-d99a-4766-af97-550dc956f093] Running
	I0717 20:06:25.091288 1101908 system_pods.go:89] "kube-apiserver-old-k8s-version-149000" [0f0c9817-f4c9-4266-b576-c270cea11b4b] Running
	I0717 20:06:25.091298 1101908 system_pods.go:89] "kube-controller-manager-old-k8s-version-149000" [539db0c4-6e8c-42eb-9b73-686de5f6c7bf] Running
	I0717 20:06:25.091305 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:06:25.091312 1101908 system_pods.go:89] "kube-scheduler-old-k8s-version-149000" [5a27a0f7-c6c9-4324-a51c-d33c205d8724] Running
	I0717 20:06:25.091324 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:06:25.091337 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:06:25.091348 1101908 system_pods.go:126] duration metric: took 53.831561334s to wait for k8s-apps to be running ...
	I0717 20:06:25.091360 1101908 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 20:06:25.091455 1101908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:06:25.119739 1101908 system_svc.go:56] duration metric: took 28.348212ms WaitForService to wait for kubelet.
	I0717 20:06:25.119804 1101908 kubeadm.go:581] duration metric: took 59.099852409s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 20:06:25.119854 1101908 node_conditions.go:102] verifying NodePressure condition ...
	I0717 20:06:25.123561 1101908 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 20:06:25.123592 1101908 node_conditions.go:123] node cpu capacity is 2
	I0717 20:06:25.123606 1101908 node_conditions.go:105] duration metric: took 3.739793ms to run NodePressure ...
	I0717 20:06:25.123618 1101908 start.go:228] waiting for startup goroutines ...
	I0717 20:06:25.123624 1101908 start.go:233] waiting for cluster config update ...
	I0717 20:06:25.123669 1101908 start.go:242] writing updated cluster config ...
	I0717 20:06:25.124104 1101908 ssh_runner.go:195] Run: rm -f paused
	I0717 20:06:25.182838 1101908 start.go:578] kubectl: 1.27.3, cluster: 1.16.0 (minor skew: 11)
	I0717 20:06:25.185766 1101908 out.go:177] 
	W0717 20:06:25.188227 1101908 out.go:239] ! /usr/local/bin/kubectl is version 1.27.3, which may have incompatibilities with Kubernetes 1.16.0.
	I0717 20:06:25.190452 1101908 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0717 20:06:25.192660 1101908 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-149000" cluster and "default" namespace by default
	I0717 20:06:21.776152 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:23.776276 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:25.781589 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:28.278450 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:30.775293 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:33.276069 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:35.775650 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:37.777006 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:40.275701 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:42.774969 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:44.775928 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:46.776363 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:48.786345 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:51.276618 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:53.776161 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:56.276037 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:58.276310 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:00.276357 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:02.775722 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:04.775945 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:07.280130 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:09.776589 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:12.277066 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:14.775525 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:17.275601 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:19.777143 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:22.286857 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:24.775908 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:26.779341 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:29.275732 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:31.276783 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:33.776286 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:36.274383 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:38.275384 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:40.775469 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:42.776331 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:44.776843 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:47.276067 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:49.276907 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:51.277652 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:53.776315 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:55.780034 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:58.276277 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:00.776903 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:03.276429 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:05.277182 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:07.776330 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:09.777528 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:12.275388 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:14.275926 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:16.776757 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:19.276466 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:21.276544 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:23.775888 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:25.778534 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:28.277897 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:30.775389 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:32.777134 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:34.777503 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:37.276492 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:39.775380 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:41.777135 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:44.276305 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:45.868652 1103141 pod_ready.go:81] duration metric: took 4m0.000895459s waiting for pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace to be "Ready" ...
	E0717 20:08:45.868703 1103141 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 20:08:45.868714 1103141 pod_ready.go:38] duration metric: took 4m2.373683506s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 20:08:45.868742 1103141 api_server.go:52] waiting for apiserver process to appear ...
	I0717 20:08:45.868791 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 20:08:45.868907 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 20:08:45.926927 1103141 cri.go:89] found id: "b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f"
	I0717 20:08:45.926965 1103141 cri.go:89] found id: ""
	I0717 20:08:45.926977 1103141 logs.go:284] 1 containers: [b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f]
	I0717 20:08:45.927049 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:45.932247 1103141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 20:08:45.932335 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 20:08:45.976080 1103141 cri.go:89] found id: "6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8"
	I0717 20:08:45.976176 1103141 cri.go:89] found id: ""
	I0717 20:08:45.976200 1103141 logs.go:284] 1 containers: [6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8]
	I0717 20:08:45.976287 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:45.981650 1103141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 20:08:45.981738 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 20:08:46.017454 1103141 cri.go:89] found id: "9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e"
	I0717 20:08:46.017487 1103141 cri.go:89] found id: ""
	I0717 20:08:46.017495 1103141 logs.go:284] 1 containers: [9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e]
	I0717 20:08:46.017578 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:46.023282 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 20:08:46.023361 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 20:08:46.055969 1103141 cri.go:89] found id: "20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52"
	I0717 20:08:46.055998 1103141 cri.go:89] found id: ""
	I0717 20:08:46.056009 1103141 logs.go:284] 1 containers: [20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52]
	I0717 20:08:46.056063 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:46.061090 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 20:08:46.061181 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 20:08:46.094968 1103141 cri.go:89] found id: "c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39"
	I0717 20:08:46.095001 1103141 cri.go:89] found id: ""
	I0717 20:08:46.095012 1103141 logs.go:284] 1 containers: [c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39]
	I0717 20:08:46.095089 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:46.099940 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 20:08:46.100018 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 20:08:46.132535 1103141 cri.go:89] found id: "7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd"
	I0717 20:08:46.132571 1103141 cri.go:89] found id: ""
	I0717 20:08:46.132586 1103141 logs.go:284] 1 containers: [7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd]
	I0717 20:08:46.132655 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:46.138029 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 20:08:46.138112 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 20:08:46.179589 1103141 cri.go:89] found id: ""
	I0717 20:08:46.179620 1103141 logs.go:284] 0 containers: []
	W0717 20:08:46.179632 1103141 logs.go:286] No container was found matching "kindnet"
	I0717 20:08:46.179640 1103141 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 20:08:46.179728 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 20:08:46.216615 1103141 cri.go:89] found id: "1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea"
	I0717 20:08:46.216642 1103141 cri.go:89] found id: ""
	I0717 20:08:46.216650 1103141 logs.go:284] 1 containers: [1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea]
	I0717 20:08:46.216782 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:46.223815 1103141 logs.go:123] Gathering logs for kube-scheduler [20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52] ...
	I0717 20:08:46.223849 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52"
	I0717 20:08:46.274046 1103141 logs.go:123] Gathering logs for kube-proxy [c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39] ...
	I0717 20:08:46.274093 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39"
	I0717 20:08:46.314239 1103141 logs.go:123] Gathering logs for kube-controller-manager [7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd] ...
	I0717 20:08:46.314285 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd"
	I0717 20:08:46.372521 1103141 logs.go:123] Gathering logs for kubelet ...
	I0717 20:08:46.372568 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 20:08:46.473516 1103141 logs.go:123] Gathering logs for describe nodes ...
	I0717 20:08:46.473576 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 20:08:46.628553 1103141 logs.go:123] Gathering logs for coredns [9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e] ...
	I0717 20:08:46.628626 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e"
	I0717 20:08:46.663929 1103141 logs.go:123] Gathering logs for storage-provisioner [1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea] ...
	I0717 20:08:46.663976 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea"
	I0717 20:08:46.699494 1103141 logs.go:123] Gathering logs for CRI-O ...
	I0717 20:08:46.699528 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 20:08:47.188357 1103141 logs.go:123] Gathering logs for container status ...
	I0717 20:08:47.188415 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 20:08:47.246863 1103141 logs.go:123] Gathering logs for dmesg ...
	I0717 20:08:47.246902 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 20:08:47.262383 1103141 logs.go:123] Gathering logs for kube-apiserver [b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f] ...
	I0717 20:08:47.262418 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f"
	I0717 20:08:47.315465 1103141 logs.go:123] Gathering logs for etcd [6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8] ...
	I0717 20:08:47.315506 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8"
	I0717 20:08:49.862911 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:08:49.880685 1103141 api_server.go:72] duration metric: took 4m9.416465331s to wait for apiserver process to appear ...
	I0717 20:08:49.880717 1103141 api_server.go:88] waiting for apiserver healthz status ...
	I0717 20:08:49.880763 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 20:08:49.880828 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 20:08:49.921832 1103141 cri.go:89] found id: "b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f"
	I0717 20:08:49.921858 1103141 cri.go:89] found id: ""
	I0717 20:08:49.921867 1103141 logs.go:284] 1 containers: [b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f]
	I0717 20:08:49.921922 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:49.927202 1103141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 20:08:49.927281 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 20:08:49.962760 1103141 cri.go:89] found id: "6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8"
	I0717 20:08:49.962784 1103141 cri.go:89] found id: ""
	I0717 20:08:49.962793 1103141 logs.go:284] 1 containers: [6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8]
	I0717 20:08:49.962850 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:49.968029 1103141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 20:08:49.968123 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 20:08:50.004191 1103141 cri.go:89] found id: "9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e"
	I0717 20:08:50.004230 1103141 cri.go:89] found id: ""
	I0717 20:08:50.004239 1103141 logs.go:284] 1 containers: [9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e]
	I0717 20:08:50.004308 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:50.009150 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 20:08:50.009223 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 20:08:50.041085 1103141 cri.go:89] found id: "20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52"
	I0717 20:08:50.041109 1103141 cri.go:89] found id: ""
	I0717 20:08:50.041118 1103141 logs.go:284] 1 containers: [20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52]
	I0717 20:08:50.041170 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:50.045541 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 20:08:50.045632 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 20:08:50.082404 1103141 cri.go:89] found id: "c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39"
	I0717 20:08:50.082439 1103141 cri.go:89] found id: ""
	I0717 20:08:50.082448 1103141 logs.go:284] 1 containers: [c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39]
	I0717 20:08:50.082510 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:50.087838 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 20:08:50.087928 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 20:08:50.130019 1103141 cri.go:89] found id: "7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd"
	I0717 20:08:50.130053 1103141 cri.go:89] found id: ""
	I0717 20:08:50.130065 1103141 logs.go:284] 1 containers: [7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd]
	I0717 20:08:50.130134 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:50.134894 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 20:08:50.134974 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 20:08:50.171033 1103141 cri.go:89] found id: ""
	I0717 20:08:50.171070 1103141 logs.go:284] 0 containers: []
	W0717 20:08:50.171081 1103141 logs.go:286] No container was found matching "kindnet"
	I0717 20:08:50.171088 1103141 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 20:08:50.171158 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 20:08:50.206952 1103141 cri.go:89] found id: "1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea"
	I0717 20:08:50.206984 1103141 cri.go:89] found id: ""
	I0717 20:08:50.206996 1103141 logs.go:284] 1 containers: [1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea]
	I0717 20:08:50.207064 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:50.211123 1103141 logs.go:123] Gathering logs for etcd [6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8] ...
	I0717 20:08:50.211152 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8"
	I0717 20:08:50.257982 1103141 logs.go:123] Gathering logs for coredns [9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e] ...
	I0717 20:08:50.258031 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e"
	I0717 20:08:50.293315 1103141 logs.go:123] Gathering logs for kube-scheduler [20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52] ...
	I0717 20:08:50.293371 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52"
	I0717 20:08:50.343183 1103141 logs.go:123] Gathering logs for kube-proxy [c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39] ...
	I0717 20:08:50.343235 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39"
	I0717 20:08:50.381821 1103141 logs.go:123] Gathering logs for kubelet ...
	I0717 20:08:50.381869 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 20:08:50.487833 1103141 logs.go:123] Gathering logs for dmesg ...
	I0717 20:08:50.487878 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 20:08:50.504213 1103141 logs.go:123] Gathering logs for describe nodes ...
	I0717 20:08:50.504259 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 20:08:50.638194 1103141 logs.go:123] Gathering logs for kube-apiserver [b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f] ...
	I0717 20:08:50.638230 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f"
	I0717 20:08:50.685572 1103141 logs.go:123] Gathering logs for kube-controller-manager [7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd] ...
	I0717 20:08:50.685627 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd"
	I0717 20:08:50.740133 1103141 logs.go:123] Gathering logs for storage-provisioner [1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea] ...
	I0717 20:08:50.740188 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea"
	I0717 20:08:50.778023 1103141 logs.go:123] Gathering logs for CRI-O ...
	I0717 20:08:50.778059 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 20:08:51.310702 1103141 logs.go:123] Gathering logs for container status ...
	I0717 20:08:51.310758 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 20:08:53.857949 1103141 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0717 20:08:53.864729 1103141 api_server.go:279] https://192.168.39.213:8443/healthz returned 200:
	ok
	I0717 20:08:53.866575 1103141 api_server.go:141] control plane version: v1.27.3
	I0717 20:08:53.866605 1103141 api_server.go:131] duration metric: took 3.985881495s to wait for apiserver health ...
	I0717 20:08:53.866613 1103141 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 20:08:53.866638 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 20:08:53.866687 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 20:08:53.902213 1103141 cri.go:89] found id: "b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f"
	I0717 20:08:53.902243 1103141 cri.go:89] found id: ""
	I0717 20:08:53.902252 1103141 logs.go:284] 1 containers: [b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f]
	I0717 20:08:53.902320 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:53.906976 1103141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 20:08:53.907073 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 20:08:53.946040 1103141 cri.go:89] found id: "6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8"
	I0717 20:08:53.946063 1103141 cri.go:89] found id: ""
	I0717 20:08:53.946071 1103141 logs.go:284] 1 containers: [6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8]
	I0717 20:08:53.946150 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:53.951893 1103141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 20:08:53.951963 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 20:08:53.988546 1103141 cri.go:89] found id: "9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e"
	I0717 20:08:53.988583 1103141 cri.go:89] found id: ""
	I0717 20:08:53.988594 1103141 logs.go:284] 1 containers: [9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e]
	I0717 20:08:53.988647 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:53.994338 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 20:08:53.994428 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 20:08:54.030092 1103141 cri.go:89] found id: "20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52"
	I0717 20:08:54.030123 1103141 cri.go:89] found id: ""
	I0717 20:08:54.030133 1103141 logs.go:284] 1 containers: [20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52]
	I0717 20:08:54.030198 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:54.035081 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 20:08:54.035189 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 20:08:54.069845 1103141 cri.go:89] found id: "c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39"
	I0717 20:08:54.069878 1103141 cri.go:89] found id: ""
	I0717 20:08:54.069889 1103141 logs.go:284] 1 containers: [c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39]
	I0717 20:08:54.069952 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:54.075257 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 20:08:54.075334 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 20:08:54.114477 1103141 cri.go:89] found id: "7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd"
	I0717 20:08:54.114516 1103141 cri.go:89] found id: ""
	I0717 20:08:54.114527 1103141 logs.go:284] 1 containers: [7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd]
	I0717 20:08:54.114602 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:54.119374 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 20:08:54.119464 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 20:08:54.160628 1103141 cri.go:89] found id: ""
	I0717 20:08:54.160660 1103141 logs.go:284] 0 containers: []
	W0717 20:08:54.160672 1103141 logs.go:286] No container was found matching "kindnet"
	I0717 20:08:54.160680 1103141 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 20:08:54.160752 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 20:08:54.200535 1103141 cri.go:89] found id: "1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea"
	I0717 20:08:54.200662 1103141 cri.go:89] found id: ""
	I0717 20:08:54.200674 1103141 logs.go:284] 1 containers: [1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea]
	I0717 20:08:54.200736 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:54.205923 1103141 logs.go:123] Gathering logs for dmesg ...
	I0717 20:08:54.205958 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 20:08:54.221020 1103141 logs.go:123] Gathering logs for describe nodes ...
	I0717 20:08:54.221057 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 20:08:54.381122 1103141 logs.go:123] Gathering logs for coredns [9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e] ...
	I0717 20:08:54.381163 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e"
	I0717 20:08:54.417207 1103141 logs.go:123] Gathering logs for kube-scheduler [20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52] ...
	I0717 20:08:54.417255 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52"
	I0717 20:08:54.469346 1103141 logs.go:123] Gathering logs for kube-proxy [c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39] ...
	I0717 20:08:54.469389 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39"
	I0717 20:08:54.513216 1103141 logs.go:123] Gathering logs for CRI-O ...
	I0717 20:08:54.513258 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 20:08:55.056597 1103141 logs.go:123] Gathering logs for kubelet ...
	I0717 20:08:55.056644 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 20:08:55.168622 1103141 logs.go:123] Gathering logs for kube-apiserver [b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f] ...
	I0717 20:08:55.168669 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f"
	I0717 20:08:55.220979 1103141 logs.go:123] Gathering logs for etcd [6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8] ...
	I0717 20:08:55.221038 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8"
	I0717 20:08:55.264086 1103141 logs.go:123] Gathering logs for kube-controller-manager [7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd] ...
	I0717 20:08:55.264124 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd"
	I0717 20:08:55.317931 1103141 logs.go:123] Gathering logs for storage-provisioner [1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea] ...
	I0717 20:08:55.317974 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea"
	I0717 20:08:55.357733 1103141 logs.go:123] Gathering logs for container status ...
	I0717 20:08:55.357770 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 20:08:57.919739 1103141 system_pods.go:59] 8 kube-system pods found
	I0717 20:08:57.919785 1103141 system_pods.go:61] "coredns-5d78c9869d-gq2b2" [833e67fa-16e2-4a5c-8c39-16cc4fbd411e] Running
	I0717 20:08:57.919795 1103141 system_pods.go:61] "etcd-embed-certs-114855" [7209c449-fbf1-4343-8636-e872684db832] Running
	I0717 20:08:57.919808 1103141 system_pods.go:61] "kube-apiserver-embed-certs-114855" [d926dfc1-71e8-44cb-9efe-4c37e0982b02] Running
	I0717 20:08:57.919817 1103141 system_pods.go:61] "kube-controller-manager-embed-certs-114855" [e16de906-3b66-4882-83ca-8d5476d45d96] Running
	I0717 20:08:57.919823 1103141 system_pods.go:61] "kube-proxy-bfvnl" [6f7fb55d-fa9f-4d08-b4ab-3814af550c01] Running
	I0717 20:08:57.919830 1103141 system_pods.go:61] "kube-scheduler-embed-certs-114855" [828c7a2f-dd4b-4318-8199-026970bb3159] Running
	I0717 20:08:57.919850 1103141 system_pods.go:61] "metrics-server-74d5c6b9c-jvfz8" [f861e320-9125-4081-b043-c90d8b027f71] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:08:57.919859 1103141 system_pods.go:61] "storage-provisioner" [994ec0db-08aa-4dd5-a137-1f6984051e65] Running
	I0717 20:08:57.919866 1103141 system_pods.go:74] duration metric: took 4.053247674s to wait for pod list to return data ...
	I0717 20:08:57.919876 1103141 default_sa.go:34] waiting for default service account to be created ...
	I0717 20:08:57.925726 1103141 default_sa.go:45] found service account: "default"
	I0717 20:08:57.925756 1103141 default_sa.go:55] duration metric: took 5.874288ms for default service account to be created ...
	I0717 20:08:57.925765 1103141 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 20:08:57.934835 1103141 system_pods.go:86] 8 kube-system pods found
	I0717 20:08:57.934869 1103141 system_pods.go:89] "coredns-5d78c9869d-gq2b2" [833e67fa-16e2-4a5c-8c39-16cc4fbd411e] Running
	I0717 20:08:57.934875 1103141 system_pods.go:89] "etcd-embed-certs-114855" [7209c449-fbf1-4343-8636-e872684db832] Running
	I0717 20:08:57.934880 1103141 system_pods.go:89] "kube-apiserver-embed-certs-114855" [d926dfc1-71e8-44cb-9efe-4c37e0982b02] Running
	I0717 20:08:57.934886 1103141 system_pods.go:89] "kube-controller-manager-embed-certs-114855" [e16de906-3b66-4882-83ca-8d5476d45d96] Running
	I0717 20:08:57.934890 1103141 system_pods.go:89] "kube-proxy-bfvnl" [6f7fb55d-fa9f-4d08-b4ab-3814af550c01] Running
	I0717 20:08:57.934894 1103141 system_pods.go:89] "kube-scheduler-embed-certs-114855" [828c7a2f-dd4b-4318-8199-026970bb3159] Running
	I0717 20:08:57.934903 1103141 system_pods.go:89] "metrics-server-74d5c6b9c-jvfz8" [f861e320-9125-4081-b043-c90d8b027f71] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:08:57.934908 1103141 system_pods.go:89] "storage-provisioner" [994ec0db-08aa-4dd5-a137-1f6984051e65] Running
	I0717 20:08:57.934917 1103141 system_pods.go:126] duration metric: took 9.146607ms to wait for k8s-apps to be running ...
	I0717 20:08:57.934924 1103141 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 20:08:57.934972 1103141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:08:57.952480 1103141 system_svc.go:56] duration metric: took 17.537719ms WaitForService to wait for kubelet.
	I0717 20:08:57.952531 1103141 kubeadm.go:581] duration metric: took 4m17.48831739s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 20:08:57.952581 1103141 node_conditions.go:102] verifying NodePressure condition ...
	I0717 20:08:57.956510 1103141 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 20:08:57.956581 1103141 node_conditions.go:123] node cpu capacity is 2
	I0717 20:08:57.956599 1103141 node_conditions.go:105] duration metric: took 4.010178ms to run NodePressure ...
	I0717 20:08:57.956633 1103141 start.go:228] waiting for startup goroutines ...
	I0717 20:08:57.956646 1103141 start.go:233] waiting for cluster config update ...
	I0717 20:08:57.956665 1103141 start.go:242] writing updated cluster config ...
	I0717 20:08:57.957107 1103141 ssh_runner.go:195] Run: rm -f paused
	I0717 20:08:58.016891 1103141 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 20:08:58.019566 1103141 out.go:177] * Done! kubectl is now configured to use "embed-certs-114855" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-07-17 19:58:30 UTC, ends at Mon 2023-07-17 20:12:31 UTC. --
	Jul 17 20:12:31 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:12:31.020693968Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064,PodSandboxId:a4576d8c267801d343e59914c2c8e248871e45fbf878fc20bdb0931a46ca7240,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689623978088243350,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43705714-97f1-4b06-8eeb-04d60c22112a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaf554,io.kubernetes.container.restartCount:
3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac703c93251a1c625d5c119244b2df7ffaa5b7a3851a204813c22e503a4bb9c,PodSandboxId:61887e5f4fc14f0d25fbffa093f73ced556889fe435a0858ed95a47f7a3e2f5a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689623952987320194,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49340f84-ca4d-4b97-af9e-87640bf8f354,},Annotations:map[string]string{io.kubernetes.container.hash: f929af23,io.kubernetes.container.restartCount: 1,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524,PodSandboxId:0775100a29b1ec53be6cb56232955fea88a907bad5118d62422a27e0966ba1f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689623951555591856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-rjqsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f27e2de9-9849-40e2-b6dc-1ee27537b1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 691dbde2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88,PodSandboxId:a4576d8c267801d343e59914c2c8e248871e45fbf878fc20bdb0931a46ca7240,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689623947574728622,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 43705714-97f1-4b06-8eeb-04d60c22112a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaf554,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951,PodSandboxId:acc4e1b28ae1cbd938d5dae4a844a05115d84c2c05ed0806fba92cd0a19ceeaf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689623947494839330,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9qfpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecb
84bb9-57a2-4a42-8104-b792d38479ca,},Annotations:map[string]string{io.kubernetes.container.hash: fc6896a8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261,PodSandboxId:049c6810e8b181a0e1f8bb999623af3524eff314807d3aba18e29aa8d0a2d4f5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689623938394977500,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3a6acc96761078f5e3f112b985fb36ac,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2,PodSandboxId:acc8fd3a376672b9813f427e64d8501593644c51fe03cfd3ae8972897ba16597,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689623938368769020,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb6983b48c57f61fd65c2f60ecf99612,},Anno
tations:map[string]string{io.kubernetes.container.hash: eff98e2d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6,PodSandboxId:a6fea49e65dd13d4622e3427f23f2297d5b80538763c6ff6fe2dca187928b337,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689623938200355022,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 427
287902605e4c35be8b9f387563dd1,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a,PodSandboxId:60e19ea94bfee45cf0163945a8cac877b9bddcbe848976f76a6c98319c8167c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689623937828074425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da2
6d20fa8eb535904c512a72aaf9f4a,},Annotations:map[string]string{io.kubernetes.container.hash: 90d81152,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b86b4154-5639-4974-b47d-d5e021e89c6f name=/runtime.v1.RuntimeService/ListContainers
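The journal entries in this section are CRI-O answering repeated `/runtime.v1.RuntimeService/ListContainers` (and `v1alpha2`) requests from a polling client; each response enumerates the running control-plane and addon containers on default-k8s-diff-port-711413. As a rough, hypothetical illustration of the same RPC (not part of the report), the sketch below issues ListContainers against CRI-O's default unix socket with the Kubernetes cri-api Go client; the socket path and printed fields are assumptions, and in practice `sudo crictl ps -a` against the same endpoint gives the equivalent view.

// Hypothetical sketch: issue the ListContainers CRI call that the journal
// entries record, using an empty filter so the full container list is
// returned (matching the "No filters were applied" debug lines).
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O listens on a unix socket; gRPC understands the unix:// scheme.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// Print a short ID, the container name, and its CRI state.
		fmt.Printf("%s\t%s\t%s\n", c.Id[:12], c.Metadata.Name, c.State)
	}
}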
	Jul 17 20:12:31 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:12:31.176558829Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d70c44b7-8dbf-4475-a910-863a798a2031 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:12:31 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:12:31.176665663Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d70c44b7-8dbf-4475-a910-863a798a2031 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:12:31 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:12:31.176896087Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064,PodSandboxId:a4576d8c267801d343e59914c2c8e248871e45fbf878fc20bdb0931a46ca7240,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689623978088243350,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43705714-97f1-4b06-8eeb-04d60c22112a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaf554,io.kubernetes.container.restartCount:
3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac703c93251a1c625d5c119244b2df7ffaa5b7a3851a204813c22e503a4bb9c,PodSandboxId:61887e5f4fc14f0d25fbffa093f73ced556889fe435a0858ed95a47f7a3e2f5a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689623952987320194,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49340f84-ca4d-4b97-af9e-87640bf8f354,},Annotations:map[string]string{io.kubernetes.container.hash: f929af23,io.kubernetes.container.restartCount: 1,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524,PodSandboxId:0775100a29b1ec53be6cb56232955fea88a907bad5118d62422a27e0966ba1f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689623951555591856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-rjqsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f27e2de9-9849-40e2-b6dc-1ee27537b1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 691dbde2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88,PodSandboxId:a4576d8c267801d343e59914c2c8e248871e45fbf878fc20bdb0931a46ca7240,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689623947574728622,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 43705714-97f1-4b06-8eeb-04d60c22112a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaf554,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951,PodSandboxId:acc4e1b28ae1cbd938d5dae4a844a05115d84c2c05ed0806fba92cd0a19ceeaf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689623947494839330,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9qfpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecb
84bb9-57a2-4a42-8104-b792d38479ca,},Annotations:map[string]string{io.kubernetes.container.hash: fc6896a8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261,PodSandboxId:049c6810e8b181a0e1f8bb999623af3524eff314807d3aba18e29aa8d0a2d4f5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689623938394977500,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3a6acc96761078f5e3f112b985fb36ac,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2,PodSandboxId:acc8fd3a376672b9813f427e64d8501593644c51fe03cfd3ae8972897ba16597,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689623938368769020,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb6983b48c57f61fd65c2f60ecf99612,},Anno
tations:map[string]string{io.kubernetes.container.hash: eff98e2d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6,PodSandboxId:a6fea49e65dd13d4622e3427f23f2297d5b80538763c6ff6fe2dca187928b337,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689623938200355022,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 427
287902605e4c35be8b9f387563dd1,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a,PodSandboxId:60e19ea94bfee45cf0163945a8cac877b9bddcbe848976f76a6c98319c8167c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689623937828074425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da2
6d20fa8eb535904c512a72aaf9f4a,},Annotations:map[string]string{io.kubernetes.container.hash: 90d81152,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d70c44b7-8dbf-4475-a910-863a798a2031 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:12:31 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:12:31.217886485Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fd7226e4-b99f-43d2-aba9-d4084ba91f5a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:12:31 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:12:31.217983959Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fd7226e4-b99f-43d2-aba9-d4084ba91f5a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:12:31 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:12:31.218225672Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064,PodSandboxId:a4576d8c267801d343e59914c2c8e248871e45fbf878fc20bdb0931a46ca7240,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689623978088243350,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43705714-97f1-4b06-8eeb-04d60c22112a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaf554,io.kubernetes.container.restartCount:
3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac703c93251a1c625d5c119244b2df7ffaa5b7a3851a204813c22e503a4bb9c,PodSandboxId:61887e5f4fc14f0d25fbffa093f73ced556889fe435a0858ed95a47f7a3e2f5a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689623952987320194,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49340f84-ca4d-4b97-af9e-87640bf8f354,},Annotations:map[string]string{io.kubernetes.container.hash: f929af23,io.kubernetes.container.restartCount: 1,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524,PodSandboxId:0775100a29b1ec53be6cb56232955fea88a907bad5118d62422a27e0966ba1f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689623951555591856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-rjqsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f27e2de9-9849-40e2-b6dc-1ee27537b1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 691dbde2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88,PodSandboxId:a4576d8c267801d343e59914c2c8e248871e45fbf878fc20bdb0931a46ca7240,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689623947574728622,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 43705714-97f1-4b06-8eeb-04d60c22112a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaf554,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951,PodSandboxId:acc4e1b28ae1cbd938d5dae4a844a05115d84c2c05ed0806fba92cd0a19ceeaf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689623947494839330,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9qfpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecb
84bb9-57a2-4a42-8104-b792d38479ca,},Annotations:map[string]string{io.kubernetes.container.hash: fc6896a8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261,PodSandboxId:049c6810e8b181a0e1f8bb999623af3524eff314807d3aba18e29aa8d0a2d4f5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689623938394977500,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3a6acc96761078f5e3f112b985fb36ac,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2,PodSandboxId:acc8fd3a376672b9813f427e64d8501593644c51fe03cfd3ae8972897ba16597,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689623938368769020,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb6983b48c57f61fd65c2f60ecf99612,},Anno
tations:map[string]string{io.kubernetes.container.hash: eff98e2d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6,PodSandboxId:a6fea49e65dd13d4622e3427f23f2297d5b80538763c6ff6fe2dca187928b337,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689623938200355022,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 427
287902605e4c35be8b9f387563dd1,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a,PodSandboxId:60e19ea94bfee45cf0163945a8cac877b9bddcbe848976f76a6c98319c8167c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689623937828074425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da2
6d20fa8eb535904c512a72aaf9f4a,},Annotations:map[string]string{io.kubernetes.container.hash: 90d81152,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fd7226e4-b99f-43d2-aba9-d4084ba91f5a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:12:31 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:12:31.257328709Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=67298627-19a8-460d-a110-89f53ab149ea name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:12:31 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:12:31.257510086Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=67298627-19a8-460d-a110-89f53ab149ea name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:12:31 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:12:31.257726369Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064,PodSandboxId:a4576d8c267801d343e59914c2c8e248871e45fbf878fc20bdb0931a46ca7240,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689623978088243350,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43705714-97f1-4b06-8eeb-04d60c22112a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaf554,io.kubernetes.container.restartCount:
3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac703c93251a1c625d5c119244b2df7ffaa5b7a3851a204813c22e503a4bb9c,PodSandboxId:61887e5f4fc14f0d25fbffa093f73ced556889fe435a0858ed95a47f7a3e2f5a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689623952987320194,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49340f84-ca4d-4b97-af9e-87640bf8f354,},Annotations:map[string]string{io.kubernetes.container.hash: f929af23,io.kubernetes.container.restartCount: 1,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524,PodSandboxId:0775100a29b1ec53be6cb56232955fea88a907bad5118d62422a27e0966ba1f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689623951555591856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-rjqsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f27e2de9-9849-40e2-b6dc-1ee27537b1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 691dbde2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88,PodSandboxId:a4576d8c267801d343e59914c2c8e248871e45fbf878fc20bdb0931a46ca7240,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689623947574728622,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 43705714-97f1-4b06-8eeb-04d60c22112a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaf554,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951,PodSandboxId:acc4e1b28ae1cbd938d5dae4a844a05115d84c2c05ed0806fba92cd0a19ceeaf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689623947494839330,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9qfpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecb
84bb9-57a2-4a42-8104-b792d38479ca,},Annotations:map[string]string{io.kubernetes.container.hash: fc6896a8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261,PodSandboxId:049c6810e8b181a0e1f8bb999623af3524eff314807d3aba18e29aa8d0a2d4f5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689623938394977500,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3a6acc96761078f5e3f112b985fb36ac,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2,PodSandboxId:acc8fd3a376672b9813f427e64d8501593644c51fe03cfd3ae8972897ba16597,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689623938368769020,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb6983b48c57f61fd65c2f60ecf99612,},Anno
tations:map[string]string{io.kubernetes.container.hash: eff98e2d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6,PodSandboxId:a6fea49e65dd13d4622e3427f23f2297d5b80538763c6ff6fe2dca187928b337,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689623938200355022,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 427
287902605e4c35be8b9f387563dd1,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a,PodSandboxId:60e19ea94bfee45cf0163945a8cac877b9bddcbe848976f76a6c98319c8167c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689623937828074425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da2
6d20fa8eb535904c512a72aaf9f4a,},Annotations:map[string]string{io.kubernetes.container.hash: 90d81152,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=67298627-19a8-460d-a110-89f53ab149ea name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:12:31 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:12:31.293264105Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9cf3f5be-53f5-4a5c-b0ba-90dff96fafa3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:12:31 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:12:31.293362350Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9cf3f5be-53f5-4a5c-b0ba-90dff96fafa3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:12:31 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:12:31.293685004Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064,PodSandboxId:a4576d8c267801d343e59914c2c8e248871e45fbf878fc20bdb0931a46ca7240,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689623978088243350,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43705714-97f1-4b06-8eeb-04d60c22112a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaf554,io.kubernetes.container.restartCount:
3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac703c93251a1c625d5c119244b2df7ffaa5b7a3851a204813c22e503a4bb9c,PodSandboxId:61887e5f4fc14f0d25fbffa093f73ced556889fe435a0858ed95a47f7a3e2f5a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689623952987320194,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49340f84-ca4d-4b97-af9e-87640bf8f354,},Annotations:map[string]string{io.kubernetes.container.hash: f929af23,io.kubernetes.container.restartCount: 1,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524,PodSandboxId:0775100a29b1ec53be6cb56232955fea88a907bad5118d62422a27e0966ba1f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689623951555591856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-rjqsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f27e2de9-9849-40e2-b6dc-1ee27537b1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 691dbde2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88,PodSandboxId:a4576d8c267801d343e59914c2c8e248871e45fbf878fc20bdb0931a46ca7240,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689623947574728622,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 43705714-97f1-4b06-8eeb-04d60c22112a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaf554,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951,PodSandboxId:acc4e1b28ae1cbd938d5dae4a844a05115d84c2c05ed0806fba92cd0a19ceeaf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689623947494839330,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9qfpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecb
84bb9-57a2-4a42-8104-b792d38479ca,},Annotations:map[string]string{io.kubernetes.container.hash: fc6896a8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261,PodSandboxId:049c6810e8b181a0e1f8bb999623af3524eff314807d3aba18e29aa8d0a2d4f5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689623938394977500,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3a6acc96761078f5e3f112b985fb36ac,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2,PodSandboxId:acc8fd3a376672b9813f427e64d8501593644c51fe03cfd3ae8972897ba16597,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689623938368769020,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb6983b48c57f61fd65c2f60ecf99612,},Anno
tations:map[string]string{io.kubernetes.container.hash: eff98e2d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6,PodSandboxId:a6fea49e65dd13d4622e3427f23f2297d5b80538763c6ff6fe2dca187928b337,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689623938200355022,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 427
287902605e4c35be8b9f387563dd1,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a,PodSandboxId:60e19ea94bfee45cf0163945a8cac877b9bddcbe848976f76a6c98319c8167c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689623937828074425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da2
6d20fa8eb535904c512a72aaf9f4a,},Annotations:map[string]string{io.kubernetes.container.hash: 90d81152,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9cf3f5be-53f5-4a5c-b0ba-90dff96fafa3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:12:31 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:12:31.332167955Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=42a97aa1-ace5-43c2-9692-16aaed490cb0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:12:31 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:12:31.332264734Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=42a97aa1-ace5-43c2-9692-16aaed490cb0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:12:31 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:12:31.332528855Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064,PodSandboxId:a4576d8c267801d343e59914c2c8e248871e45fbf878fc20bdb0931a46ca7240,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689623978088243350,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43705714-97f1-4b06-8eeb-04d60c22112a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaf554,io.kubernetes.container.restartCount:
3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac703c93251a1c625d5c119244b2df7ffaa5b7a3851a204813c22e503a4bb9c,PodSandboxId:61887e5f4fc14f0d25fbffa093f73ced556889fe435a0858ed95a47f7a3e2f5a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689623952987320194,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49340f84-ca4d-4b97-af9e-87640bf8f354,},Annotations:map[string]string{io.kubernetes.container.hash: f929af23,io.kubernetes.container.restartCount: 1,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524,PodSandboxId:0775100a29b1ec53be6cb56232955fea88a907bad5118d62422a27e0966ba1f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689623951555591856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-rjqsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f27e2de9-9849-40e2-b6dc-1ee27537b1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 691dbde2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88,PodSandboxId:a4576d8c267801d343e59914c2c8e248871e45fbf878fc20bdb0931a46ca7240,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689623947574728622,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 43705714-97f1-4b06-8eeb-04d60c22112a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaf554,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951,PodSandboxId:acc4e1b28ae1cbd938d5dae4a844a05115d84c2c05ed0806fba92cd0a19ceeaf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689623947494839330,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9qfpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecb
84bb9-57a2-4a42-8104-b792d38479ca,},Annotations:map[string]string{io.kubernetes.container.hash: fc6896a8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261,PodSandboxId:049c6810e8b181a0e1f8bb999623af3524eff314807d3aba18e29aa8d0a2d4f5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689623938394977500,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3a6acc96761078f5e3f112b985fb36ac,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2,PodSandboxId:acc8fd3a376672b9813f427e64d8501593644c51fe03cfd3ae8972897ba16597,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689623938368769020,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb6983b48c57f61fd65c2f60ecf99612,},Anno
tations:map[string]string{io.kubernetes.container.hash: eff98e2d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6,PodSandboxId:a6fea49e65dd13d4622e3427f23f2297d5b80538763c6ff6fe2dca187928b337,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689623938200355022,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 427
287902605e4c35be8b9f387563dd1,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a,PodSandboxId:60e19ea94bfee45cf0163945a8cac877b9bddcbe848976f76a6c98319c8167c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689623937828074425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da2
6d20fa8eb535904c512a72aaf9f4a,},Annotations:map[string]string{io.kubernetes.container.hash: 90d81152,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=42a97aa1-ace5-43c2-9692-16aaed490cb0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:12:31 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:12:31.370832855Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=844d20fa-9760-46fb-9504-358b71989802 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:12:31 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:12:31.370926278Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=844d20fa-9760-46fb-9504-358b71989802 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:12:31 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:12:31.371157454Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064,PodSandboxId:a4576d8c267801d343e59914c2c8e248871e45fbf878fc20bdb0931a46ca7240,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689623978088243350,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43705714-97f1-4b06-8eeb-04d60c22112a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaf554,io.kubernetes.container.restartCount:
3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac703c93251a1c625d5c119244b2df7ffaa5b7a3851a204813c22e503a4bb9c,PodSandboxId:61887e5f4fc14f0d25fbffa093f73ced556889fe435a0858ed95a47f7a3e2f5a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689623952987320194,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49340f84-ca4d-4b97-af9e-87640bf8f354,},Annotations:map[string]string{io.kubernetes.container.hash: f929af23,io.kubernetes.container.restartCount: 1,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524,PodSandboxId:0775100a29b1ec53be6cb56232955fea88a907bad5118d62422a27e0966ba1f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689623951555591856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-rjqsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f27e2de9-9849-40e2-b6dc-1ee27537b1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 691dbde2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88,PodSandboxId:a4576d8c267801d343e59914c2c8e248871e45fbf878fc20bdb0931a46ca7240,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689623947574728622,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 43705714-97f1-4b06-8eeb-04d60c22112a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaf554,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951,PodSandboxId:acc4e1b28ae1cbd938d5dae4a844a05115d84c2c05ed0806fba92cd0a19ceeaf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689623947494839330,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9qfpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecb
84bb9-57a2-4a42-8104-b792d38479ca,},Annotations:map[string]string{io.kubernetes.container.hash: fc6896a8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261,PodSandboxId:049c6810e8b181a0e1f8bb999623af3524eff314807d3aba18e29aa8d0a2d4f5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689623938394977500,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3a6acc96761078f5e3f112b985fb36ac,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2,PodSandboxId:acc8fd3a376672b9813f427e64d8501593644c51fe03cfd3ae8972897ba16597,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689623938368769020,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb6983b48c57f61fd65c2f60ecf99612,},Anno
tations:map[string]string{io.kubernetes.container.hash: eff98e2d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6,PodSandboxId:a6fea49e65dd13d4622e3427f23f2297d5b80538763c6ff6fe2dca187928b337,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689623938200355022,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 427
287902605e4c35be8b9f387563dd1,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a,PodSandboxId:60e19ea94bfee45cf0163945a8cac877b9bddcbe848976f76a6c98319c8167c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689623937828074425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da2
6d20fa8eb535904c512a72aaf9f4a,},Annotations:map[string]string{io.kubernetes.container.hash: 90d81152,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=844d20fa-9760-46fb-9504-358b71989802 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:12:31 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:12:31.419053317Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=511ef2d3-018f-429c-9817-bc54770b3e96 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:12:31 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:12:31.419210800Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=511ef2d3-018f-429c-9817-bc54770b3e96 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:12:31 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:12:31.419480067Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064,PodSandboxId:a4576d8c267801d343e59914c2c8e248871e45fbf878fc20bdb0931a46ca7240,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689623978088243350,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43705714-97f1-4b06-8eeb-04d60c22112a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaf554,io.kubernetes.container.restartCount:
3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac703c93251a1c625d5c119244b2df7ffaa5b7a3851a204813c22e503a4bb9c,PodSandboxId:61887e5f4fc14f0d25fbffa093f73ced556889fe435a0858ed95a47f7a3e2f5a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689623952987320194,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49340f84-ca4d-4b97-af9e-87640bf8f354,},Annotations:map[string]string{io.kubernetes.container.hash: f929af23,io.kubernetes.container.restartCount: 1,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524,PodSandboxId:0775100a29b1ec53be6cb56232955fea88a907bad5118d62422a27e0966ba1f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689623951555591856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-rjqsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f27e2de9-9849-40e2-b6dc-1ee27537b1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 691dbde2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88,PodSandboxId:a4576d8c267801d343e59914c2c8e248871e45fbf878fc20bdb0931a46ca7240,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689623947574728622,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 43705714-97f1-4b06-8eeb-04d60c22112a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaf554,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951,PodSandboxId:acc4e1b28ae1cbd938d5dae4a844a05115d84c2c05ed0806fba92cd0a19ceeaf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689623947494839330,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9qfpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecb
84bb9-57a2-4a42-8104-b792d38479ca,},Annotations:map[string]string{io.kubernetes.container.hash: fc6896a8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261,PodSandboxId:049c6810e8b181a0e1f8bb999623af3524eff314807d3aba18e29aa8d0a2d4f5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689623938394977500,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3a6acc96761078f5e3f112b985fb36ac,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2,PodSandboxId:acc8fd3a376672b9813f427e64d8501593644c51fe03cfd3ae8972897ba16597,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689623938368769020,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb6983b48c57f61fd65c2f60ecf99612,},Anno
tations:map[string]string{io.kubernetes.container.hash: eff98e2d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6,PodSandboxId:a6fea49e65dd13d4622e3427f23f2297d5b80538763c6ff6fe2dca187928b337,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689623938200355022,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 427
287902605e4c35be8b9f387563dd1,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a,PodSandboxId:60e19ea94bfee45cf0163945a8cac877b9bddcbe848976f76a6c98319c8167c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689623937828074425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da2
6d20fa8eb535904c512a72aaf9f4a,},Annotations:map[string]string{io.kubernetes.container.hash: 90d81152,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=511ef2d3-018f-429c-9817-bc54770b3e96 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:12:31 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:12:31.454127797Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fd074c16-d3cf-4b7d-9c40-e995cb742dbe name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:12:31 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:12:31.454226818Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fd074c16-d3cf-4b7d-9c40-e995cb742dbe name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:12:31 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:12:31.454491639Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064,PodSandboxId:a4576d8c267801d343e59914c2c8e248871e45fbf878fc20bdb0931a46ca7240,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689623978088243350,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43705714-97f1-4b06-8eeb-04d60c22112a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaf554,io.kubernetes.container.restartCount:
3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac703c93251a1c625d5c119244b2df7ffaa5b7a3851a204813c22e503a4bb9c,PodSandboxId:61887e5f4fc14f0d25fbffa093f73ced556889fe435a0858ed95a47f7a3e2f5a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689623952987320194,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49340f84-ca4d-4b97-af9e-87640bf8f354,},Annotations:map[string]string{io.kubernetes.container.hash: f929af23,io.kubernetes.container.restartCount: 1,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524,PodSandboxId:0775100a29b1ec53be6cb56232955fea88a907bad5118d62422a27e0966ba1f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689623951555591856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-rjqsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f27e2de9-9849-40e2-b6dc-1ee27537b1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 691dbde2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88,PodSandboxId:a4576d8c267801d343e59914c2c8e248871e45fbf878fc20bdb0931a46ca7240,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689623947574728622,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 43705714-97f1-4b06-8eeb-04d60c22112a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaf554,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951,PodSandboxId:acc4e1b28ae1cbd938d5dae4a844a05115d84c2c05ed0806fba92cd0a19ceeaf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689623947494839330,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9qfpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecb
84bb9-57a2-4a42-8104-b792d38479ca,},Annotations:map[string]string{io.kubernetes.container.hash: fc6896a8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261,PodSandboxId:049c6810e8b181a0e1f8bb999623af3524eff314807d3aba18e29aa8d0a2d4f5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689623938394977500,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3a6acc96761078f5e3f112b985fb36ac,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2,PodSandboxId:acc8fd3a376672b9813f427e64d8501593644c51fe03cfd3ae8972897ba16597,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689623938368769020,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb6983b48c57f61fd65c2f60ecf99612,},Anno
tations:map[string]string{io.kubernetes.container.hash: eff98e2d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6,PodSandboxId:a6fea49e65dd13d4622e3427f23f2297d5b80538763c6ff6fe2dca187928b337,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689623938200355022,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 427
287902605e4c35be8b9f387563dd1,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a,PodSandboxId:60e19ea94bfee45cf0163945a8cac877b9bddcbe848976f76a6c98319c8167c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689623937828074425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da2
6d20fa8eb535904c512a72aaf9f4a,},Annotations:map[string]string{io.kubernetes.container.hash: 90d81152,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fd074c16-d3cf-4b7d-9c40-e995cb742dbe name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	19f50eeeb11e7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       3                   a4576d8c26780
	5ac703c93251a       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   61887e5f4fc14
	cb8cdd2d3f50b       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   0775100a29b1e
	4a47132787243       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   a4576d8c26780
	76ea7912be2a5       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c                                      13 minutes ago      Running             kube-proxy                1                   acc4e1b28ae1c
	9790a6abc4658       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a                                      13 minutes ago      Running             kube-scheduler            1                   049c6810e8b18
	bb86b8e5369c2       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681                                      13 minutes ago      Running             etcd                      1                   acc8fd3a37667
	280d9b31ea5e8       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f                                      13 minutes ago      Running             kube-controller-manager   1                   a6fea49e65dd1
	210ff04a86d98       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a                                      13 minutes ago      Running             kube-apiserver            1                   60e19ea94bfee
	
	* 
	* ==> coredns [cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:37471 - 39720 "HINFO IN 1862711990285091975.8658787963171313958. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010230826s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-711413
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-711413
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5
	                    minikube.k8s.io/name=default-k8s-diff-port-711413
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T19_50_25_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 19:50:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-711413
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 20:12:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 20:09:48 +0000   Mon, 17 Jul 2023 19:50:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 20:09:48 +0000   Mon, 17 Jul 2023 19:50:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 20:09:48 +0000   Mon, 17 Jul 2023 19:50:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 20:09:48 +0000   Mon, 17 Jul 2023 19:59:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.51
	  Hostname:    default-k8s-diff-port-711413
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 98afff5ed7e644b585b6493e16507063
	  System UUID:                98afff5e-d7e6-44b5-85b6-493e16507063
	  Boot ID:                    7d80f073-64da-4970-ac03-47f3d9fd982d
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-5d78c9869d-rjqsv                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-711413                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-default-k8s-diff-port-711413             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-711413    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-9qfpg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-711413             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-74d5c6b9c-hzcd7                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node default-k8s-diff-port-711413 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node default-k8s-diff-port-711413 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m                kubelet          Node default-k8s-diff-port-711413 status is now: NodeHasSufficientPID
	  Normal  NodeReady                22m                kubelet          Node default-k8s-diff-port-711413 status is now: NodeReady
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-711413 event: Registered Node default-k8s-diff-port-711413 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-711413 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-711413 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-711413 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-711413 event: Registered Node default-k8s-diff-port-711413 in Controller
	
	* 
	* ==> dmesg <==
	* [Jul17 19:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.074877] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.451778] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.692009] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.159967] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.522211] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.314538] systemd-fstab-generator[636]: Ignoring "noauto" for root device
	[  +0.157240] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.174828] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[  +0.135536] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.262036] systemd-fstab-generator[695]: Ignoring "noauto" for root device
	[ +17.346326] systemd-fstab-generator[911]: Ignoring "noauto" for root device
	[Jul17 19:59] kauditd_printk_skb: 29 callbacks suppressed
	
	* 
	* ==> etcd [bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2] <==
	* {"level":"info","ts":"2023-07-17T19:59:11.433Z","caller":"traceutil/trace.go:171","msg":"trace[540452764] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-711413; range_end:; response_count:1; response_revision:546; }","duration":"316.111345ms","start":"2023-07-17T19:59:11.116Z","end":"2023-07-17T19:59:11.433Z","steps":["trace[540452764] 'range keys from in-memory index tree'  (duration: 315.891029ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T19:59:11.433Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T19:59:11.116Z","time spent":"316.382059ms","remote":"127.0.0.1:33154","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":5750,"request content":"key:\"/registry/minions/default-k8s-diff-port-711413\" "}
	{"level":"warn","ts":"2023-07-17T19:59:11.433Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"316.757012ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/metrics-server\" ","response":"range_response_count:1 size:5182"}
	{"level":"info","ts":"2023-07-17T19:59:11.433Z","caller":"traceutil/trace.go:171","msg":"trace[800873597] range","detail":"{range_begin:/registry/deployments/kube-system/metrics-server; range_end:; response_count:1; response_revision:546; }","duration":"316.81579ms","start":"2023-07-17T19:59:11.116Z","end":"2023-07-17T19:59:11.433Z","steps":["trace[800873597] 'range keys from in-memory index tree'  (duration: 312.136974ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T19:59:11.433Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T19:59:11.116Z","time spent":"316.890878ms","remote":"127.0.0.1:33220","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":1,"response size":5205,"request content":"key:\"/registry/deployments/kube-system/metrics-server\" "}
	{"level":"info","ts":"2023-07-17T19:59:35.549Z","caller":"traceutil/trace.go:171","msg":"trace[1926204832] linearizableReadLoop","detail":"{readStateIndex:624; appliedIndex:623; }","duration":"134.402605ms","start":"2023-07-17T19:59:35.414Z","end":"2023-07-17T19:59:35.549Z","steps":["trace[1926204832] 'read index received'  (duration: 134.183617ms)","trace[1926204832] 'applied index is now lower than readState.Index'  (duration: 218.188µs)"],"step_count":2}
	{"level":"info","ts":"2023-07-17T19:59:35.549Z","caller":"traceutil/trace.go:171","msg":"trace[1529739564] transaction","detail":"{read_only:false; response_revision:582; number_of_response:1; }","duration":"262.229499ms","start":"2023-07-17T19:59:35.287Z","end":"2023-07-17T19:59:35.549Z","steps":["trace[1529739564] 'process raft request'  (duration: 261.552856ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T19:59:35.549Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.303503ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-17T19:59:35.554Z","caller":"traceutil/trace.go:171","msg":"trace[491444416] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:582; }","duration":"138.6493ms","start":"2023-07-17T19:59:35.415Z","end":"2023-07-17T19:59:35.554Z","steps":["trace[491444416] 'agreement among raft nodes before linearized reading'  (duration: 134.246074ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T19:59:35.550Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.174474ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-74d5c6b9c-hzcd7\" ","response":"range_response_count:1 size:4031"}
	{"level":"info","ts":"2023-07-17T19:59:35.554Z","caller":"traceutil/trace.go:171","msg":"trace[704280281] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-74d5c6b9c-hzcd7; range_end:; response_count:1; response_revision:582; }","duration":"139.70707ms","start":"2023-07-17T19:59:35.414Z","end":"2023-07-17T19:59:35.554Z","steps":["trace[704280281] 'agreement among raft nodes before linearized reading'  (duration: 135.090004ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T19:59:36.198Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T19:59:35.702Z","time spent":"496.15954ms","remote":"127.0.0.1:33462","response type":"/etcdserverpb.Maintenance/Status","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2023-07-17T19:59:36.198Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"334.542882ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" ","response":"range_response_count:1 size:339"}
	{"level":"info","ts":"2023-07-17T19:59:36.198Z","caller":"traceutil/trace.go:171","msg":"trace[2135310627] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:1; response_revision:582; }","duration":"334.688173ms","start":"2023-07-17T19:59:35.863Z","end":"2023-07-17T19:59:36.198Z","steps":["trace[2135310627] 'range keys from in-memory index tree'  (duration: 333.774779ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T19:59:36.198Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T19:59:35.863Z","time spent":"334.788802ms","remote":"127.0.0.1:33150","response type":"/etcdserverpb.KV/Range","request count":0,"request size":30,"response count":1,"response size":362,"request content":"key:\"/registry/namespaces/default\" "}
	{"level":"warn","ts":"2023-07-17T19:59:36.198Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"286.844295ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-74d5c6b9c-hzcd7\" ","response":"range_response_count:1 size:4031"}
	{"level":"info","ts":"2023-07-17T19:59:36.198Z","caller":"traceutil/trace.go:171","msg":"trace[2022411069] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-74d5c6b9c-hzcd7; range_end:; response_count:1; response_revision:582; }","duration":"287.527566ms","start":"2023-07-17T19:59:35.911Z","end":"2023-07-17T19:59:36.198Z","steps":["trace[2022411069] 'range keys from in-memory index tree'  (duration: 286.683996ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T19:59:36.586Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.408966ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16714698135784185315 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.72.51\" mod_revision:579 > success:<request_put:<key:\"/registry/masterleases/192.168.72.51\" value_size:66 lease:7491326098929409505 >> failure:<request_range:<key:\"/registry/masterleases/192.168.72.51\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-07-17T19:59:36.586Z","caller":"traceutil/trace.go:171","msg":"trace[2013416461] linearizableReadLoop","detail":"{readStateIndex:626; appliedIndex:625; }","duration":"173.39174ms","start":"2023-07-17T19:59:36.413Z","end":"2023-07-17T19:59:36.586Z","steps":["trace[2013416461] 'read index received'  (duration: 43.291421ms)","trace[2013416461] 'applied index is now lower than readState.Index'  (duration: 130.098961ms)"],"step_count":2}
	{"level":"info","ts":"2023-07-17T19:59:36.586Z","caller":"traceutil/trace.go:171","msg":"trace[323324872] transaction","detail":"{read_only:false; response_revision:583; number_of_response:1; }","duration":"255.299079ms","start":"2023-07-17T19:59:36.331Z","end":"2023-07-17T19:59:36.586Z","steps":["trace[323324872] 'process raft request'  (duration: 125.137803ms)","trace[323324872] 'compare'  (duration: 129.313232ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-17T19:59:36.586Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.550501ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-74d5c6b9c-hzcd7\" ","response":"range_response_count:1 size:4031"}
	{"level":"info","ts":"2023-07-17T19:59:36.586Z","caller":"traceutil/trace.go:171","msg":"trace[629697079] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-74d5c6b9c-hzcd7; range_end:; response_count:1; response_revision:583; }","duration":"173.676636ms","start":"2023-07-17T19:59:36.413Z","end":"2023-07-17T19:59:36.586Z","steps":["trace[629697079] 'agreement among raft nodes before linearized reading'  (duration: 173.433372ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T20:09:01.770Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":816}
	{"level":"info","ts":"2023-07-17T20:09:01.772Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":816,"took":"1.853843ms","hash":3863156796}
	{"level":"info","ts":"2023-07-17T20:09:01.772Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3863156796,"revision":816,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  20:12:31 up 14 min,  0 users,  load average: 0.13, 0.22, 0.20
	Linux default-k8s-diff-port-711413 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a] <==
	* E0717 20:09:05.052182       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	E0717 20:09:05.052270       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 20:09:05.052287       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 20:09:05.053360       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 20:10:03.767709       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.100.30.24:443: connect: connection refused
	I0717 20:10:03.767818       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0717 20:10:05.052742       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 20:10:05.052982       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 20:10:05.053026       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 20:10:05.054039       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 20:10:05.054134       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 20:10:05.054185       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 20:11:03.767178       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.100.30.24:443: connect: connection refused
	I0717 20:11:03.767244       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0717 20:12:03.767152       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.100.30.24:443: connect: connection refused
	I0717 20:12:03.767278       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0717 20:12:05.054095       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 20:12:05.054457       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 20:12:05.054567       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 20:12:05.054460       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 20:12:05.054832       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 20:12:05.056041       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6] <==
	* W0717 20:06:17.464564       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:06:46.990473       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:06:47.474568       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:07:16.996715       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:07:17.483007       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:07:47.002466       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:07:47.492235       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:08:17.009313       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:08:17.503335       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:08:47.015774       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:08:47.513285       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:09:17.023838       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:09:17.522577       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:09:47.029626       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:09:47.531858       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:10:17.037711       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:10:17.542197       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:10:47.044048       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:10:47.554268       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:11:17.050896       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:11:17.564091       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:11:47.058493       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:11:47.574301       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:12:17.066345       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:12:17.586931       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	
	* 
	* ==> kube-proxy [76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951] <==
	* I0717 19:59:08.210790       1 node.go:141] Successfully retrieved node IP: 192.168.72.51
	I0717 19:59:08.211295       1 server_others.go:110] "Detected node IP" address="192.168.72.51"
	I0717 19:59:08.211519       1 server_others.go:554] "Using iptables proxy"
	I0717 19:59:08.355356       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0717 19:59:08.355524       1 server_others.go:192] "Using iptables Proxier"
	I0717 19:59:08.355604       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 19:59:08.356957       1 server.go:658] "Version info" version="v1.27.3"
	I0717 19:59:08.357111       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 19:59:08.358242       1 config.go:188] "Starting service config controller"
	I0717 19:59:08.358642       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 19:59:08.359245       1 config.go:97] "Starting endpoint slice config controller"
	I0717 19:59:08.397649       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 19:59:08.397701       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0717 19:59:08.360028       1 config.go:315] "Starting node config controller"
	I0717 19:59:08.397746       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 19:59:08.398001       1 shared_informer.go:318] Caches are synced for node config
	I0717 19:59:08.496231       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261] <==
	* W0717 19:59:04.027922       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 19:59:04.027937       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 19:59:04.028112       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 19:59:04.028131       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 19:59:04.033778       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 19:59:04.033851       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 19:59:04.038049       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 19:59:04.038120       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 19:59:04.038217       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 19:59:04.038231       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 19:59:04.038270       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 19:59:04.038279       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 19:59:04.038333       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 19:59:04.038342       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 19:59:04.038386       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 19:59:04.038477       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 19:59:04.038491       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 19:59:04.038500       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 19:59:04.038508       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 19:59:04.038519       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 19:59:04.038673       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 19:59:04.038686       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 19:59:04.046828       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 19:59:04.046917       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0717 19:59:05.202572       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-07-17 19:58:30 UTC, ends at Mon 2023-07-17 20:12:32 UTC. --
	Jul 17 20:09:45 default-k8s-diff-port-711413 kubelet[917]: E0717 20:09:45.842301     917 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-74d5c6b9c-hzcd7" podUID=17e01399-9910-4f01-abe7-3eae271af1db
	Jul 17 20:09:56 default-k8s-diff-port-711413 kubelet[917]: E0717 20:09:56.838311     917 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 17 20:09:56 default-k8s-diff-port-711413 kubelet[917]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 20:09:56 default-k8s-diff-port-711413 kubelet[917]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 20:09:56 default-k8s-diff-port-711413 kubelet[917]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 20:09:58 default-k8s-diff-port-711413 kubelet[917]: E0717 20:09:58.820358     917 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hzcd7" podUID=17e01399-9910-4f01-abe7-3eae271af1db
	Jul 17 20:10:13 default-k8s-diff-port-711413 kubelet[917]: E0717 20:10:13.817217     917 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hzcd7" podUID=17e01399-9910-4f01-abe7-3eae271af1db
	Jul 17 20:10:25 default-k8s-diff-port-711413 kubelet[917]: E0717 20:10:25.816757     917 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hzcd7" podUID=17e01399-9910-4f01-abe7-3eae271af1db
	Jul 17 20:10:40 default-k8s-diff-port-711413 kubelet[917]: E0717 20:10:40.817064     917 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hzcd7" podUID=17e01399-9910-4f01-abe7-3eae271af1db
	Jul 17 20:10:52 default-k8s-diff-port-711413 kubelet[917]: E0717 20:10:52.817270     917 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hzcd7" podUID=17e01399-9910-4f01-abe7-3eae271af1db
	Jul 17 20:10:56 default-k8s-diff-port-711413 kubelet[917]: E0717 20:10:56.839909     917 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 17 20:10:56 default-k8s-diff-port-711413 kubelet[917]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 20:10:56 default-k8s-diff-port-711413 kubelet[917]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 20:10:56 default-k8s-diff-port-711413 kubelet[917]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 20:11:06 default-k8s-diff-port-711413 kubelet[917]: E0717 20:11:06.818758     917 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hzcd7" podUID=17e01399-9910-4f01-abe7-3eae271af1db
	Jul 17 20:11:18 default-k8s-diff-port-711413 kubelet[917]: E0717 20:11:18.817946     917 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hzcd7" podUID=17e01399-9910-4f01-abe7-3eae271af1db
	Jul 17 20:11:33 default-k8s-diff-port-711413 kubelet[917]: E0717 20:11:33.816382     917 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hzcd7" podUID=17e01399-9910-4f01-abe7-3eae271af1db
	Jul 17 20:11:47 default-k8s-diff-port-711413 kubelet[917]: E0717 20:11:47.817234     917 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hzcd7" podUID=17e01399-9910-4f01-abe7-3eae271af1db
	Jul 17 20:11:56 default-k8s-diff-port-711413 kubelet[917]: E0717 20:11:56.834044     917 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 17 20:11:56 default-k8s-diff-port-711413 kubelet[917]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 20:11:56 default-k8s-diff-port-711413 kubelet[917]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 20:11:56 default-k8s-diff-port-711413 kubelet[917]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 20:12:02 default-k8s-diff-port-711413 kubelet[917]: E0717 20:12:02.817209     917 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hzcd7" podUID=17e01399-9910-4f01-abe7-3eae271af1db
	Jul 17 20:12:15 default-k8s-diff-port-711413 kubelet[917]: E0717 20:12:15.818636     917 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hzcd7" podUID=17e01399-9910-4f01-abe7-3eae271af1db
	Jul 17 20:12:29 default-k8s-diff-port-711413 kubelet[917]: E0717 20:12:29.817369     917 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hzcd7" podUID=17e01399-9910-4f01-abe7-3eae271af1db
	
	* 
	* ==> storage-provisioner [19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064] <==
	* I0717 19:59:38.240874       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 19:59:38.253804       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 19:59:38.253902       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 19:59:55.678479       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 19:59:55.678863       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-711413_fadc3d2d-e6d2-4a65-a4d5-0c0e40183736!
	I0717 19:59:55.679128       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"16b5bc07-9934-4f7c-b344-8a0ca0c9f59e", APIVersion:"v1", ResourceVersion:"601", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-711413_fadc3d2d-e6d2-4a65-a4d5-0c0e40183736 became leader
	I0717 19:59:55.780258       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-711413_fadc3d2d-e6d2-4a65-a4d5-0c0e40183736!
	
	* 
	* ==> storage-provisioner [4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88] <==
	* I0717 19:59:07.997756       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0717 19:59:38.000516       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-711413 -n default-k8s-diff-port-711413
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-711413 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5c6b9c-hzcd7
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-711413 describe pod metrics-server-74d5c6b9c-hzcd7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-711413 describe pod metrics-server-74d5c6b9c-hzcd7: exit status 1 (71.748783ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5c6b9c-hzcd7" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-711413 describe pod metrics-server-74d5c6b9c-hzcd7: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.71s)
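The post-mortem above can be reproduced by hand against the same profile; the commands below are a minimal sketch built only from the context name, field selector, and pod name shown in the log (the explicit kube-system namespace is an assumption, since the helper's describe call did not pass one):

    kubectl --context default-k8s-diff-port-711413 get pods -A --field-selector=status.phase!=Running
    kubectl --context default-k8s-diff-port-711413 -n kube-system describe pod metrics-server-74d5c6b9c-hzcd7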

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.77s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0717 20:08:01.330421 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/functional-685960/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-149000 -n old-k8s-version-149000
start_stop_delete_test.go:274: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-07-17 20:15:25.790443296 +0000 UTC m=+5527.099138845
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-149000 -n old-k8s-version-149000
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-149000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-149000 logs -n 25: (1.879638083s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p no-preload-408472             | no-preload-408472            | jenkins | v1.30.1 | 17 Jul 23 19:50 UTC | 17 Jul 23 19:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-408472                                   | no-preload-408472            | jenkins | v1.30.1 | 17 Jul 23 19:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-711413  | default-k8s-diff-port-711413 | jenkins | v1.30.1 | 17 Jul 23 19:51 UTC | 17 Jul 23 19:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-711413 | jenkins | v1.30.1 | 17 Jul 23 19:51 UTC |                     |
	|         | default-k8s-diff-port-711413                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-891260             | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:51 UTC | 17 Jul 23 19:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-891260                                   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:51 UTC | 17 Jul 23 19:51 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-891260                  | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:51 UTC | 17 Jul 23 19:51 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-891260 --memory=2200 --alsologtostderr   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:51 UTC | 17 Jul 23 19:52 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-891260 sudo                              | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-891260                                   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-891260                                   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-891260                                   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	| delete  | -p newest-cni-891260                                   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	| delete  | -p                                                     | disable-driver-mounts-178387 | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	|         | disable-driver-mounts-178387                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-114855                                  | embed-certs-114855           | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-149000             | old-k8s-version-149000       | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-149000                              | old-k8s-version-149000       | jenkins | v1.30.1 | 17 Jul 23 19:53 UTC | 17 Jul 23 20:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-408472                  | no-preload-408472            | jenkins | v1.30.1 | 17 Jul 23 19:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-408472                                   | no-preload-408472            | jenkins | v1.30.1 | 17 Jul 23 19:53 UTC | 17 Jul 23 20:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-711413       | default-k8s-diff-port-711413 | jenkins | v1.30.1 | 17 Jul 23 19:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-711413 | jenkins | v1.30.1 | 17 Jul 23 19:54 UTC | 17 Jul 23 20:03 UTC |
	|         | default-k8s-diff-port-711413                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-114855            | embed-certs-114855           | jenkins | v1.30.1 | 17 Jul 23 19:54 UTC | 17 Jul 23 19:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-114855                                  | embed-certs-114855           | jenkins | v1.30.1 | 17 Jul 23 19:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-114855                 | embed-certs-114855           | jenkins | v1.30.1 | 17 Jul 23 19:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-114855                                  | embed-certs-114855           | jenkins | v1.30.1 | 17 Jul 23 19:57 UTC | 17 Jul 23 20:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 19:57:15
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 19:57:15.731358 1103141 out.go:296] Setting OutFile to fd 1 ...
	I0717 19:57:15.731568 1103141 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:57:15.731580 1103141 out.go:309] Setting ErrFile to fd 2...
	I0717 19:57:15.731587 1103141 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:57:15.731815 1103141 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1061725/.minikube/bin
	I0717 19:57:15.732432 1103141 out.go:303] Setting JSON to false
	I0717 19:57:15.733539 1103141 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":16787,"bootTime":1689607049,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 19:57:15.733642 1103141 start.go:138] virtualization: kvm guest
	I0717 19:57:15.737317 1103141 out.go:177] * [embed-certs-114855] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 19:57:15.739399 1103141 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 19:57:15.739429 1103141 notify.go:220] Checking for updates...
	I0717 19:57:15.741380 1103141 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:57:15.743518 1103141 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 19:57:15.745436 1103141 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1061725/.minikube
	I0717 19:57:15.747588 1103141 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 19:57:15.749399 1103141 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 19:57:15.751806 1103141 config.go:182] Loaded profile config "embed-certs-114855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:57:15.752284 1103141 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:57:15.752344 1103141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:57:15.767989 1103141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42921
	I0717 19:57:15.768411 1103141 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:57:15.769006 1103141 main.go:141] libmachine: Using API Version  1
	I0717 19:57:15.769098 1103141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:57:15.769495 1103141 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:57:15.769753 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 19:57:15.770054 1103141 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 19:57:15.770369 1103141 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:57:15.770414 1103141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:57:15.785632 1103141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40597
	I0717 19:57:15.786193 1103141 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:57:15.786746 1103141 main.go:141] libmachine: Using API Version  1
	I0717 19:57:15.786780 1103141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:57:15.787144 1103141 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:57:15.787366 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 19:57:15.827764 1103141 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 19:57:15.829847 1103141 start.go:298] selected driver: kvm2
	I0717 19:57:15.829881 1103141 start.go:880] validating driver "kvm2" against &{Name:embed-certs-114855 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-114855 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:57:15.830064 1103141 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 19:57:15.830818 1103141 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:57:15.830919 1103141 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16890-1061725/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 19:57:15.846540 1103141 install.go:137] /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2 version is 1.30.1
	I0717 19:57:15.846983 1103141 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 19:57:15.847033 1103141 cni.go:84] Creating CNI manager for ""
	I0717 19:57:15.847067 1103141 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:57:15.847081 1103141 start_flags.go:319] config:
	{Name:embed-certs-114855 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-114855 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:57:15.847306 1103141 iso.go:125] acquiring lock: {Name:mk4c08fdde891ec9d41b144ca36ec57b2da32175 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:57:15.849943 1103141 out.go:177] * Starting control plane node embed-certs-114855 in cluster embed-certs-114855
	I0717 19:57:14.309967 1101908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.177:22: connect: no route to host
	I0717 19:57:15.851794 1103141 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 19:57:15.851858 1103141 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0717 19:57:15.851874 1103141 cache.go:57] Caching tarball of preloaded images
	I0717 19:57:15.851987 1103141 preload.go:174] Found /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 19:57:15.852001 1103141 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 19:57:15.852143 1103141 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855/config.json ...
	I0717 19:57:15.852383 1103141 start.go:365] acquiring machines lock for embed-certs-114855: {Name:mk1fbc5d19c9a63796b02883911e39f1c406ff89 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:57:17.381986 1101908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.177:22: connect: no route to host
	I0717 19:57:23.461901 1101908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.177:22: connect: no route to host
	I0717 19:57:26.533953 1101908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.177:22: connect: no route to host
	I0717 19:57:32.613932 1101908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.177:22: connect: no route to host
	I0717 19:57:35.685977 1101908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.177:22: connect: no route to host
	I0717 19:57:41.765852 1101908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.177:22: connect: no route to host
	I0717 19:57:44.837869 1101908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.177:22: connect: no route to host
	I0717 19:57:50.917965 1101908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.177:22: connect: no route to host
	I0717 19:57:53.921775 1102136 start.go:369] acquired machines lock for "no-preload-408472" in 4m25.126407357s
	I0717 19:57:53.921838 1102136 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:57:53.921845 1102136 fix.go:54] fixHost starting: 
	I0717 19:57:53.922267 1102136 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:57:53.922309 1102136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:57:53.937619 1102136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39509
	I0717 19:57:53.938191 1102136 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:57:53.938815 1102136 main.go:141] libmachine: Using API Version  1
	I0717 19:57:53.938854 1102136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:57:53.939222 1102136 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:57:53.939501 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	I0717 19:57:53.939704 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetState
	I0717 19:57:53.941674 1102136 fix.go:102] recreateIfNeeded on no-preload-408472: state=Stopped err=<nil>
	I0717 19:57:53.941732 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	W0717 19:57:53.941961 1102136 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 19:57:53.944840 1102136 out.go:177] * Restarting existing kvm2 VM for "no-preload-408472" ...
	I0717 19:57:53.919175 1101908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:57:53.919232 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 19:57:53.921597 1101908 machine.go:91] provisioned docker machine in 4m37.562634254s
	I0717 19:57:53.921653 1101908 fix.go:56] fixHost completed within 4m37.5908464s
	I0717 19:57:53.921659 1101908 start.go:83] releasing machines lock for "old-k8s-version-149000", held for 4m37.590895645s
	W0717 19:57:53.921680 1101908 start.go:688] error starting host: provision: host is not running
	W0717 19:57:53.921815 1101908 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0717 19:57:53.921826 1101908 start.go:703] Will try again in 5 seconds ...
	I0717 19:57:53.947202 1102136 main.go:141] libmachine: (no-preload-408472) Calling .Start
	I0717 19:57:53.947561 1102136 main.go:141] libmachine: (no-preload-408472) Ensuring networks are active...
	I0717 19:57:53.948787 1102136 main.go:141] libmachine: (no-preload-408472) Ensuring network default is active
	I0717 19:57:53.949254 1102136 main.go:141] libmachine: (no-preload-408472) Ensuring network mk-no-preload-408472 is active
	I0717 19:57:53.949695 1102136 main.go:141] libmachine: (no-preload-408472) Getting domain xml...
	I0717 19:57:53.950763 1102136 main.go:141] libmachine: (no-preload-408472) Creating domain...
	I0717 19:57:55.256278 1102136 main.go:141] libmachine: (no-preload-408472) Waiting to get IP...
	I0717 19:57:55.257164 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:57:55.257506 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:57:55.257619 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:57:55.257495 1103281 retry.go:31] will retry after 210.861865ms: waiting for machine to come up
	I0717 19:57:55.470210 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:57:55.470771 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:57:55.470798 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:57:55.470699 1103281 retry.go:31] will retry after 348.064579ms: waiting for machine to come up
	I0717 19:57:55.820645 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:57:55.821335 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:57:55.821366 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:57:55.821251 1103281 retry.go:31] will retry after 340.460253ms: waiting for machine to come up
	I0717 19:57:56.163913 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:57:56.164380 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:57:56.164412 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:57:56.164331 1103281 retry.go:31] will retry after 551.813243ms: waiting for machine to come up
	I0717 19:57:56.718505 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:57:56.719004 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:57:56.719034 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:57:56.718953 1103281 retry.go:31] will retry after 640.277548ms: waiting for machine to come up
	I0717 19:57:57.360930 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:57:57.361456 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:57:57.361485 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:57:57.361395 1103281 retry.go:31] will retry after 590.296988ms: waiting for machine to come up
	I0717 19:57:57.953399 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:57:57.953886 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:57:57.953913 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:57:57.953811 1103281 retry.go:31] will retry after 884.386688ms: waiting for machine to come up
	I0717 19:57:58.923546 1101908 start.go:365] acquiring machines lock for old-k8s-version-149000: {Name:mk1fbc5d19c9a63796b02883911e39f1c406ff89 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:57:58.840158 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:57:58.840577 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:57:58.840610 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:57:58.840529 1103281 retry.go:31] will retry after 1.10470212s: waiting for machine to come up
	I0717 19:57:59.947457 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:57:59.947973 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:57:59.948001 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:57:59.947933 1103281 retry.go:31] will retry after 1.338375271s: waiting for machine to come up
	I0717 19:58:01.288616 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:01.289194 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:58:01.289226 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:58:01.289133 1103281 retry.go:31] will retry after 1.633127486s: waiting for machine to come up
	I0717 19:58:02.923621 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:02.924330 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:58:02.924365 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:58:02.924253 1103281 retry.go:31] will retry after 2.365924601s: waiting for machine to come up
	I0717 19:58:05.291979 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:05.292487 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:58:05.292519 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:58:05.292430 1103281 retry.go:31] will retry after 2.846623941s: waiting for machine to come up
	I0717 19:58:08.142536 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:08.143021 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:58:08.143050 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:58:08.142961 1103281 retry.go:31] will retry after 3.495052949s: waiting for machine to come up
	I0717 19:58:11.641858 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:11.642358 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:58:11.642384 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:58:11.642302 1103281 retry.go:31] will retry after 5.256158303s: waiting for machine to come up
	I0717 19:58:18.263277 1102415 start.go:369] acquired machines lock for "default-k8s-diff-port-711413" in 4m14.158154198s
	I0717 19:58:18.263342 1102415 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:58:18.263362 1102415 fix.go:54] fixHost starting: 
	I0717 19:58:18.263897 1102415 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:58:18.263950 1102415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:58:18.280719 1102415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33059
	I0717 19:58:18.281241 1102415 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:58:18.281819 1102415 main.go:141] libmachine: Using API Version  1
	I0717 19:58:18.281845 1102415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:58:18.282261 1102415 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:58:18.282489 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	I0717 19:58:18.282657 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetState
	I0717 19:58:18.284625 1102415 fix.go:102] recreateIfNeeded on default-k8s-diff-port-711413: state=Stopped err=<nil>
	I0717 19:58:18.284655 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	W0717 19:58:18.284839 1102415 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 19:58:18.288135 1102415 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-711413" ...
	I0717 19:58:16.902597 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:16.903197 1102136 main.go:141] libmachine: (no-preload-408472) Found IP for machine: 192.168.61.65
	I0717 19:58:16.903226 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has current primary IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:16.903232 1102136 main.go:141] libmachine: (no-preload-408472) Reserving static IP address...
	I0717 19:58:16.903758 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "no-preload-408472", mac: "52:54:00:36:75:ac", ip: "192.168.61.65"} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:16.903794 1102136 main.go:141] libmachine: (no-preload-408472) Reserved static IP address: 192.168.61.65
	I0717 19:58:16.903806 1102136 main.go:141] libmachine: (no-preload-408472) DBG | skip adding static IP to network mk-no-preload-408472 - found existing host DHCP lease matching {name: "no-preload-408472", mac: "52:54:00:36:75:ac", ip: "192.168.61.65"}
	I0717 19:58:16.903820 1102136 main.go:141] libmachine: (no-preload-408472) DBG | Getting to WaitForSSH function...
	I0717 19:58:16.903830 1102136 main.go:141] libmachine: (no-preload-408472) Waiting for SSH to be available...
	I0717 19:58:16.906385 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:16.906796 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:16.906833 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:16.906966 1102136 main.go:141] libmachine: (no-preload-408472) DBG | Using SSH client type: external
	I0717 19:58:16.907000 1102136 main.go:141] libmachine: (no-preload-408472) DBG | Using SSH private key: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/no-preload-408472/id_rsa (-rw-------)
	I0717 19:58:16.907034 1102136 main.go:141] libmachine: (no-preload-408472) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/no-preload-408472/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:58:16.907056 1102136 main.go:141] libmachine: (no-preload-408472) DBG | About to run SSH command:
	I0717 19:58:16.907116 1102136 main.go:141] libmachine: (no-preload-408472) DBG | exit 0
	I0717 19:58:16.998306 1102136 main.go:141] libmachine: (no-preload-408472) DBG | SSH cmd err, output: <nil>: 
	I0717 19:58:16.998744 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetConfigRaw
	I0717 19:58:16.999490 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetIP
	I0717 19:58:17.002697 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.003108 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:17.003156 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.003405 1102136 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/config.json ...
	I0717 19:58:17.003642 1102136 machine.go:88] provisioning docker machine ...
	I0717 19:58:17.003668 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	I0717 19:58:17.003989 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetMachineName
	I0717 19:58:17.004208 1102136 buildroot.go:166] provisioning hostname "no-preload-408472"
	I0717 19:58:17.004234 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetMachineName
	I0717 19:58:17.004464 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:58:17.007043 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.007337 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:17.007371 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.007517 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:58:17.007730 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:17.007933 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:17.008071 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:58:17.008245 1102136 main.go:141] libmachine: Using SSH client type: native
	I0717 19:58:17.008906 1102136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.65 22 <nil> <nil>}
	I0717 19:58:17.008927 1102136 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-408472 && echo "no-preload-408472" | sudo tee /etc/hostname
	I0717 19:58:17.143779 1102136 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-408472
	
	I0717 19:58:17.143816 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:58:17.146881 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.147332 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:17.147384 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.147556 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:58:17.147807 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:17.147990 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:17.148137 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:58:17.148320 1102136 main.go:141] libmachine: Using SSH client type: native
	I0717 19:58:17.148789 1102136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.65 22 <nil> <nil>}
	I0717 19:58:17.148811 1102136 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-408472' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-408472/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-408472' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:58:17.279254 1102136 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:58:17.279292 1102136 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16890-1061725/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-1061725/.minikube}
	I0717 19:58:17.279339 1102136 buildroot.go:174] setting up certificates
	I0717 19:58:17.279375 1102136 provision.go:83] configureAuth start
	I0717 19:58:17.279390 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetMachineName
	I0717 19:58:17.279745 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetIP
	I0717 19:58:17.283125 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.283563 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:17.283610 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.283837 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:58:17.286508 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.286931 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:17.286975 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.287088 1102136 provision.go:138] copyHostCerts
	I0717 19:58:17.287196 1102136 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem, removing ...
	I0717 19:58:17.287210 1102136 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem
	I0717 19:58:17.287299 1102136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem (1082 bytes)
	I0717 19:58:17.287430 1102136 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem, removing ...
	I0717 19:58:17.287443 1102136 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem
	I0717 19:58:17.287486 1102136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem (1123 bytes)
	I0717 19:58:17.287634 1102136 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem, removing ...
	I0717 19:58:17.287650 1102136 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem
	I0717 19:58:17.287691 1102136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem (1675 bytes)
	I0717 19:58:17.287762 1102136 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem org=jenkins.no-preload-408472 san=[192.168.61.65 192.168.61.65 localhost 127.0.0.1 minikube no-preload-408472]
	I0717 19:58:17.492065 1102136 provision.go:172] copyRemoteCerts
	I0717 19:58:17.492172 1102136 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:58:17.492209 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:58:17.495444 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.495931 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:17.495971 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.496153 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:58:17.496406 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:17.496605 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:58:17.496793 1102136 sshutil.go:53] new ssh client: &{IP:192.168.61.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/no-preload-408472/id_rsa Username:docker}
	I0717 19:58:17.588540 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 19:58:17.613378 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0717 19:58:17.638066 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:58:17.662222 1102136 provision.go:86] duration metric: configureAuth took 382.813668ms
	I0717 19:58:17.662267 1102136 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:58:17.662522 1102136 config.go:182] Loaded profile config "no-preload-408472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:58:17.662613 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:58:17.665914 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.666415 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:17.666475 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.666673 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:58:17.666934 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:17.667122 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:17.667287 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:58:17.667466 1102136 main.go:141] libmachine: Using SSH client type: native
	I0717 19:58:17.667885 1102136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.65 22 <nil> <nil>}
	I0717 19:58:17.667903 1102136 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:58:17.997416 1102136 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:58:17.997461 1102136 machine.go:91] provisioned docker machine in 993.802909ms
	I0717 19:58:17.997476 1102136 start.go:300] post-start starting for "no-preload-408472" (driver="kvm2")
	I0717 19:58:17.997490 1102136 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:58:17.997533 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	I0717 19:58:17.997925 1102136 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:58:17.998013 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:58:18.000755 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:18.001185 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:18.001210 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:18.001409 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:58:18.001682 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:18.001892 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:58:18.002059 1102136 sshutil.go:53] new ssh client: &{IP:192.168.61.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/no-preload-408472/id_rsa Username:docker}
	I0717 19:58:18.093738 1102136 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:58:18.098709 1102136 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 19:58:18.098744 1102136 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/addons for local assets ...
	I0717 19:58:18.098854 1102136 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/files for local assets ...
	I0717 19:58:18.098974 1102136 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> 10689542.pem in /etc/ssl/certs
	I0717 19:58:18.099098 1102136 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:58:18.110195 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:58:18.135572 1102136 start.go:303] post-start completed in 138.074603ms
	I0717 19:58:18.135628 1102136 fix.go:56] fixHost completed within 24.21376423s
	I0717 19:58:18.135652 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:58:18.139033 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:18.139617 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:18.139656 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:18.139847 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:58:18.140146 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:18.140366 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:18.140612 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:58:18.140819 1102136 main.go:141] libmachine: Using SSH client type: native
	I0717 19:58:18.141265 1102136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.65 22 <nil> <nil>}
	I0717 19:58:18.141282 1102136 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:58:18.263053 1102136 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689623898.247474645
	
	I0717 19:58:18.263080 1102136 fix.go:206] guest clock: 1689623898.247474645
	I0717 19:58:18.263096 1102136 fix.go:219] Guest: 2023-07-17 19:58:18.247474645 +0000 UTC Remote: 2023-07-17 19:58:18.135632998 +0000 UTC m=+289.513196741 (delta=111.841647ms)
	I0717 19:58:18.263124 1102136 fix.go:190] guest clock delta is within tolerance: 111.841647ms
	I0717 19:58:18.263132 1102136 start.go:83] releasing machines lock for "no-preload-408472", held for 24.341313825s
	I0717 19:58:18.263184 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	I0717 19:58:18.263451 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetIP
	I0717 19:58:18.266352 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:18.266707 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:18.266732 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:18.266920 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	I0717 19:58:18.267684 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	I0717 19:58:18.267935 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	I0717 19:58:18.268033 1102136 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:58:18.268095 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:58:18.268205 1102136 ssh_runner.go:195] Run: cat /version.json
	I0717 19:58:18.268249 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:58:18.270983 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:18.271223 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:18.271324 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:18.271385 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:18.271494 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:58:18.271608 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:18.271628 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:18.271697 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:18.271879 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:58:18.271895 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:58:18.272094 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:18.272099 1102136 sshutil.go:53] new ssh client: &{IP:192.168.61.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/no-preload-408472/id_rsa Username:docker}
	I0717 19:58:18.272253 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:58:18.272419 1102136 sshutil.go:53] new ssh client: &{IP:192.168.61.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/no-preload-408472/id_rsa Username:docker}
	W0717 19:58:18.395775 1102136 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 19:58:18.395916 1102136 ssh_runner.go:195] Run: systemctl --version
	I0717 19:58:18.403799 1102136 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:58:18.557449 1102136 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:58:18.564470 1102136 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:58:18.564580 1102136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:58:18.580344 1102136 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:58:18.580386 1102136 start.go:469] detecting cgroup driver to use...
	I0717 19:58:18.580482 1102136 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:58:18.595052 1102136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:58:18.608844 1102136 docker.go:196] disabling cri-docker service (if available) ...
	I0717 19:58:18.608948 1102136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:58:18.621908 1102136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:58:18.635796 1102136 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:58:18.290375 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .Start
	I0717 19:58:18.290615 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Ensuring networks are active...
	I0717 19:58:18.291470 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Ensuring network default is active
	I0717 19:58:18.292041 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Ensuring network mk-default-k8s-diff-port-711413 is active
	I0717 19:58:18.292477 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Getting domain xml...
	I0717 19:58:18.293393 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Creating domain...
	I0717 19:58:18.751368 1102136 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:58:18.878097 1102136 docker.go:212] disabling docker service ...
	I0717 19:58:18.878186 1102136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:58:18.895094 1102136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:58:18.909958 1102136 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:58:19.032014 1102136 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:58:19.141917 1102136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:58:19.158474 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:58:19.178688 1102136 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:58:19.178767 1102136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:58:19.189949 1102136 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:58:19.190059 1102136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:58:19.201270 1102136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:58:19.212458 1102136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:58:19.226193 1102136 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:58:19.239919 1102136 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:58:19.251627 1102136 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:58:19.251711 1102136 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:58:19.268984 1102136 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:58:19.281898 1102136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:58:19.390523 1102136 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:58:19.599827 1102136 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:58:19.599937 1102136 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:58:19.605741 1102136 start.go:537] Will wait 60s for crictl version
	I0717 19:58:19.605810 1102136 ssh_runner.go:195] Run: which crictl
	I0717 19:58:19.610347 1102136 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:58:19.653305 1102136 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 19:58:19.653418 1102136 ssh_runner.go:195] Run: crio --version
	I0717 19:58:19.712418 1102136 ssh_runner.go:195] Run: crio --version
	I0717 19:58:19.773012 1102136 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0717 19:58:19.775099 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetIP
	I0717 19:58:19.778530 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:19.779127 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:19.779167 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:19.779477 1102136 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0717 19:58:19.784321 1102136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:58:19.797554 1102136 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 19:58:19.797682 1102136 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:58:19.833548 1102136 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0717 19:58:19.833590 1102136 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.27.3 registry.k8s.io/kube-controller-manager:v1.27.3 registry.k8s.io/kube-scheduler:v1.27.3 registry.k8s.io/kube-proxy:v1.27.3 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.7-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 19:58:19.833722 1102136 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:58:19.833749 1102136 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0717 19:58:19.833760 1102136 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.7-0
	I0717 19:58:19.833787 1102136 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0717 19:58:19.833821 1102136 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.27.3
	I0717 19:58:19.833722 1102136 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.27.3
	I0717 19:58:19.833722 1102136 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 19:58:19.833722 1102136 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.27.3
	I0717 19:58:19.835432 1102136 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 19:58:19.835461 1102136 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.7-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.7-0
	I0717 19:58:19.835497 1102136 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:58:19.835492 1102136 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0717 19:58:19.835432 1102136 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0717 19:58:19.835432 1102136 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.27.3
	I0717 19:58:19.835463 1102136 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.27.3
	I0717 19:58:19.835436 1102136 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.27.3
	I0717 19:58:20.032458 1102136 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.27.3
	I0717 19:58:20.032526 1102136 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I0717 19:58:20.035507 1102136 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.7-0
	I0717 19:58:20.035509 1102136 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.27.3
	I0717 19:58:20.041878 1102136 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 19:58:20.056915 1102136 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0717 19:58:20.099112 1102136 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.27.3
	I0717 19:58:20.119661 1102136 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:58:20.195250 1102136 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.27.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.27.3" does not exist at hash "08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a" in container runtime
	I0717 19:58:20.195338 1102136 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0717 19:58:20.195384 1102136 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0717 19:58:20.195441 1102136 ssh_runner.go:195] Run: which crictl
	I0717 19:58:20.195348 1102136 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.27.3
	I0717 19:58:20.195521 1102136 ssh_runner.go:195] Run: which crictl
	I0717 19:58:20.212109 1102136 cache_images.go:116] "registry.k8s.io/etcd:3.5.7-0" needs transfer: "registry.k8s.io/etcd:3.5.7-0" does not exist at hash "86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681" in container runtime
	I0717 19:58:20.212185 1102136 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.7-0
	I0717 19:58:20.212255 1102136 ssh_runner.go:195] Run: which crictl
	I0717 19:58:20.232021 1102136 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.27.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.27.3" does not exist at hash "41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a" in container runtime
	I0717 19:58:20.232077 1102136 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.27.3
	I0717 19:58:20.232126 1102136 ssh_runner.go:195] Run: which crictl
	I0717 19:58:20.232224 1102136 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.27.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.27.3" does not exist at hash "7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f" in container runtime
	I0717 19:58:20.232257 1102136 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 19:58:20.232287 1102136 ssh_runner.go:195] Run: which crictl
	I0717 19:58:20.363363 1102136 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.27.3" needs transfer: "registry.k8s.io/kube-proxy:v1.27.3" does not exist at hash "5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c" in container runtime
	I0717 19:58:20.363425 1102136 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.27.3
	I0717 19:58:20.363470 1102136 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0717 19:58:20.363498 1102136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0717 19:58:20.363529 1102136 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:58:20.363483 1102136 ssh_runner.go:195] Run: which crictl
	I0717 19:58:20.363579 1102136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.27.3
	I0717 19:58:20.363660 1102136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.7-0
	I0717 19:58:20.363569 1102136 ssh_runner.go:195] Run: which crictl
	I0717 19:58:20.363722 1102136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.27.3
	I0717 19:58:20.363783 1102136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 19:58:20.368457 1102136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.27.3
	I0717 19:58:20.469461 1102136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.27.3
	I0717 19:58:20.469647 1102136 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.27.3
	I0717 19:58:20.476546 1102136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.7-0
	I0717 19:58:20.476613 1102136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:58:20.476657 1102136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.27.3
	I0717 19:58:20.476703 1102136 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.7-0
	I0717 19:58:20.476751 1102136 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.27.3
	I0717 19:58:20.476824 1102136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I0717 19:58:20.476918 1102136 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I0717 19:58:20.483915 1102136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.27.3
	I0717 19:58:20.483949 1102136 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.27.3 (exists)
	I0717 19:58:20.483966 1102136 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.27.3
	I0717 19:58:20.483970 1102136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.27.3
	I0717 19:58:20.484015 1102136 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.27.3
	I0717 19:58:20.484030 1102136 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.27.3
	I0717 19:58:20.484067 1102136 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.27.3
	I0717 19:58:20.532090 1102136 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I0717 19:58:20.532113 1102136 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.7-0 (exists)
	I0717 19:58:20.532194 1102136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 19:58:20.532213 1102136 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.27.3 (exists)
	I0717 19:58:20.532304 1102136 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0717 19:58:19.668342 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting to get IP...
	I0717 19:58:19.669327 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:19.669868 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:19.669996 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:19.669860 1103407 retry.go:31] will retry after 270.908859ms: waiting for machine to come up
	I0717 19:58:19.942914 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:19.943490 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:19.943524 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:19.943434 1103407 retry.go:31] will retry after 387.572792ms: waiting for machine to come up
	I0717 19:58:20.333302 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:20.333904 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:20.333934 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:20.333842 1103407 retry.go:31] will retry after 325.807844ms: waiting for machine to come up
	I0717 19:58:20.661438 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:20.661890 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:20.661926 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:20.661828 1103407 retry.go:31] will retry after 492.482292ms: waiting for machine to come up
	I0717 19:58:21.155613 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:21.156184 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:21.156212 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:21.156089 1103407 retry.go:31] will retry after 756.388438ms: waiting for machine to come up
	I0717 19:58:21.914212 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:21.914770 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:21.914806 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:21.914695 1103407 retry.go:31] will retry after 754.504649ms: waiting for machine to come up
	I0717 19:58:22.670913 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:22.671334 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:22.671369 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:22.671278 1103407 retry.go:31] will retry after 790.272578ms: waiting for machine to come up
	I0717 19:58:23.463657 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:23.464118 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:23.464145 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:23.464042 1103407 retry.go:31] will retry after 1.267289365s: waiting for machine to come up
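
The retry.go lines above poll libvirt for the domain's IP with a growing, jittered delay between attempts. A rough sketch of that loop; lookupIP is a hypothetical stand-in for the libmachine driver call, and the delay bounds are illustrative rather than minikube's actual values:

	package machinewait

	import (
		"errors"
		"math/rand"
		"time"
	)

	// waitForIP sketches the "will retry after ..." loop in the log: poll the
	// driver for the domain's IP and sleep a randomised, growing delay between
	// attempts until the machine comes up or the attempt budget is exhausted.
	func waitForIP(lookupIP func() (string, error), attempts int) (string, error) {
		delay := 200 * time.Millisecond
		for i := 0; i < attempts; i++ {
			if ip, err := lookupIP(); err == nil && ip != "" {
				return ip, nil
			}
			// Jittered, increasing back-off, mirroring the growing
			// "will retry after" intervals logged above.
			time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
			delay = delay * 3 / 2
		}
		return "", errors.New("timed out waiting for machine to come up")
	}
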
	I0717 19:58:23.707718 1102136 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.27.3: (3.223672376s)
	I0717 19:58:23.707750 1102136 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.27.3 from cache
	I0717 19:58:23.707788 1102136 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I0717 19:58:23.707804 1102136 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.27.3: (3.223748615s)
	I0717 19:58:23.707842 1102136 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.27.3 (exists)
	I0717 19:58:23.707856 1102136 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.27.3: (3.223769648s)
	I0717 19:58:23.707862 1102136 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I0717 19:58:23.707878 1102136 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.27.3 (exists)
	I0717 19:58:23.707908 1102136 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.175586566s)
	I0717 19:58:23.707926 1102136 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0717 19:58:24.960652 1102136 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.252755334s)
	I0717 19:58:24.960691 1102136 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I0717 19:58:24.960722 1102136 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.7-0
	I0717 19:58:24.960770 1102136 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.7-0
	I0717 19:58:24.733590 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:24.734140 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:24.734176 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:24.734049 1103407 retry.go:31] will retry after 1.733875279s: waiting for machine to come up
	I0717 19:58:26.470148 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:26.470587 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:26.470640 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:26.470522 1103407 retry.go:31] will retry after 1.829632979s: waiting for machine to come up
	I0717 19:58:28.301973 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:28.302506 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:28.302560 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:28.302421 1103407 retry.go:31] will retry after 2.201530837s: waiting for machine to come up
	I0717 19:58:32.118558 1102136 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.7-0: (7.157750323s)
	I0717 19:58:32.118606 1102136 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.7-0 from cache
	I0717 19:58:32.118641 1102136 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.27.3
	I0717 19:58:32.118700 1102136 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.27.3
	I0717 19:58:33.577369 1102136 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.27.3: (1.458638516s)
	I0717 19:58:33.577400 1102136 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.27.3 from cache
	I0717 19:58:33.577447 1102136 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.27.3
	I0717 19:58:33.577595 1102136 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.27.3
	I0717 19:58:30.507029 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:30.507586 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:30.507647 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:30.507447 1103407 retry.go:31] will retry after 2.947068676s: waiting for machine to come up
	I0717 19:58:33.456714 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:33.457232 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:33.457261 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:33.457148 1103407 retry.go:31] will retry after 3.074973516s: waiting for machine to come up
	I0717 19:58:37.871095 1103141 start.go:369] acquired machines lock for "embed-certs-114855" in 1m22.018672602s
	I0717 19:58:37.871161 1103141 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:58:37.871175 1103141 fix.go:54] fixHost starting: 
	I0717 19:58:37.871619 1103141 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:58:37.871689 1103141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:58:37.889865 1103141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46381
	I0717 19:58:37.890334 1103141 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:58:37.891044 1103141 main.go:141] libmachine: Using API Version  1
	I0717 19:58:37.891070 1103141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:58:37.891471 1103141 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:58:37.891734 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 19:58:37.891927 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetState
	I0717 19:58:37.893736 1103141 fix.go:102] recreateIfNeeded on embed-certs-114855: state=Stopped err=<nil>
	I0717 19:58:37.893779 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	W0717 19:58:37.893994 1103141 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 19:58:37.896545 1103141 out.go:177] * Restarting existing kvm2 VM for "embed-certs-114855" ...
	I0717 19:58:35.345141 1102136 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.27.3: (1.767506173s)
	I0717 19:58:35.345180 1102136 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.27.3 from cache
	I0717 19:58:35.345211 1102136 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.27.3
	I0717 19:58:35.345273 1102136 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.27.3
	I0717 19:58:37.803066 1102136 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.27.3: (2.457743173s)
	I0717 19:58:37.803106 1102136 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.27.3 from cache
	I0717 19:58:37.803139 1102136 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 19:58:37.803193 1102136 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0717 19:58:38.559165 1102136 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 19:58:38.559222 1102136 cache_images.go:123] Successfully loaded all cached images
	I0717 19:58:38.559231 1102136 cache_images.go:92] LoadImages completed in 18.725611601s
	I0717 19:58:38.559363 1102136 ssh_runner.go:195] Run: crio config
	I0717 19:58:38.630364 1102136 cni.go:84] Creating CNI manager for ""
	I0717 19:58:38.630394 1102136 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:58:38.630421 1102136 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 19:58:38.630447 1102136 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.65 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-408472 NodeName:no-preload-408472 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:58:38.630640 1102136 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-408472"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
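
The block above is the rendered kubeadm config; the values that vary per profile (advertise address, node name, pod subnet, Kubernetes version) are filled into a template before it is shipped to the node. A rough sketch of that rendering step with text/template, using an abbreviated illustrative template rather than minikube's real one:

	package kubeadmcfg

	import (
		"bytes"
		"text/template"
	)

	// Params carries the per-cluster values visible in the config above; the
	// struct and the abbreviated template are illustrative only.
	type Params struct {
		AdvertiseAddress  string
		NodeName          string
		PodSubnet         string
		KubernetesVersion string
	}

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: 8443
	nodeRegistration:
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	`

	// Render executes the template with the given parameters, producing YAML
	// of the kind that the log later copies to /var/tmp/minikube/kubeadm.yaml.new.
	func Render(p Params) (string, error) {
		t, err := template.New("kubeadm").Parse(tmpl)
		if err != nil {
			return "", err
		}
		var buf bytes.Buffer
		if err := t.Execute(&buf, p); err != nil {
			return "", err
		}
		return buf.String(), nil
	}
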
	
	I0717 19:58:38.630739 1102136 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-408472 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:no-preload-408472 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 19:58:38.630813 1102136 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 19:58:38.643343 1102136 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:58:38.643443 1102136 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:58:38.653495 1102136 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
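
The kubelet drop-in above is essentially an ExecStart line assembled from the node's settings. A minimal sketch of how that flag string could be built; the flag list is taken from the logged unit, while the helper itself is illustrative and not minikube's actual code:

	package kubeletunit

	import (
		"fmt"
		"strings"
	)

	// execStart assembles a kubelet command line like the one in the drop-in
	// above from the node's Kubernetes version, name, IP and CRI socket.
	func execStart(version, nodeName, nodeIP, criSocket string) string {
		flags := []string{
			"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
			"--config=/var/lib/kubelet/config.yaml",
			"--container-runtime-endpoint=" + criSocket,
			"--hostname-override=" + nodeName,
			"--kubeconfig=/etc/kubernetes/kubelet.conf",
			"--node-ip=" + nodeIP,
		}
		return fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet %s",
			version, strings.Join(flags, " "))
	}
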
	I0717 19:58:36.535628 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.536224 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Found IP for machine: 192.168.72.51
	I0717 19:58:36.536256 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Reserving static IP address...
	I0717 19:58:36.536278 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has current primary IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.536720 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-711413", mac: "52:54:00:7d:d7:a9", ip: "192.168.72.51"} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:36.536756 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | skip adding static IP to network mk-default-k8s-diff-port-711413 - found existing host DHCP lease matching {name: "default-k8s-diff-port-711413", mac: "52:54:00:7d:d7:a9", ip: "192.168.72.51"}
	I0717 19:58:36.536773 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Reserved static IP address: 192.168.72.51
	I0717 19:58:36.536791 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for SSH to be available...
	I0717 19:58:36.536804 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | Getting to WaitForSSH function...
	I0717 19:58:36.540038 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.540593 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:36.540649 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.540764 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | Using SSH client type: external
	I0717 19:58:36.540799 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | Using SSH private key: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/default-k8s-diff-port-711413/id_rsa (-rw-------)
	I0717 19:58:36.540855 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/default-k8s-diff-port-711413/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:58:36.540876 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | About to run SSH command:
	I0717 19:58:36.540895 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | exit 0
	I0717 19:58:36.637774 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | SSH cmd err, output: <nil>: 
	I0717 19:58:36.638200 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetConfigRaw
	I0717 19:58:36.638931 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetIP
	I0717 19:58:36.642048 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.642530 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:36.642560 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.642850 1102415 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/config.json ...
	I0717 19:58:36.643061 1102415 machine.go:88] provisioning docker machine ...
	I0717 19:58:36.643080 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	I0717 19:58:36.643344 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetMachineName
	I0717 19:58:36.643516 1102415 buildroot.go:166] provisioning hostname "default-k8s-diff-port-711413"
	I0717 19:58:36.643535 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetMachineName
	I0717 19:58:36.643766 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:58:36.646810 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.647205 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:36.647243 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.647582 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:58:36.647826 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:36.648082 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:36.648275 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:58:36.648470 1102415 main.go:141] libmachine: Using SSH client type: native
	I0717 19:58:36.648883 1102415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.51 22 <nil> <nil>}
	I0717 19:58:36.648898 1102415 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-711413 && echo "default-k8s-diff-port-711413" | sudo tee /etc/hostname
	I0717 19:58:36.784478 1102415 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-711413
	
	I0717 19:58:36.784524 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:58:36.787641 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.788065 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:36.788118 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.788363 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:58:36.788605 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:36.788799 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:36.788942 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:58:36.789239 1102415 main.go:141] libmachine: Using SSH client type: native
	I0717 19:58:36.789869 1102415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.51 22 <nil> <nil>}
	I0717 19:58:36.789916 1102415 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-711413' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-711413/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-711413' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:58:36.923177 1102415 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:58:36.923211 1102415 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16890-1061725/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-1061725/.minikube}
	I0717 19:58:36.923237 1102415 buildroot.go:174] setting up certificates
	I0717 19:58:36.923248 1102415 provision.go:83] configureAuth start
	I0717 19:58:36.923257 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetMachineName
	I0717 19:58:36.923633 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetIP
	I0717 19:58:36.927049 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.927406 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:36.927443 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.927641 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:58:36.930158 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.930705 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:36.930771 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.930844 1102415 provision.go:138] copyHostCerts
	I0717 19:58:36.930969 1102415 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem, removing ...
	I0717 19:58:36.930984 1102415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem
	I0717 19:58:36.931064 1102415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem (1123 bytes)
	I0717 19:58:36.931188 1102415 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem, removing ...
	I0717 19:58:36.931201 1102415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem
	I0717 19:58:36.931235 1102415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem (1675 bytes)
	I0717 19:58:36.931315 1102415 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem, removing ...
	I0717 19:58:36.931325 1102415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem
	I0717 19:58:36.931353 1102415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem (1082 bytes)
	I0717 19:58:36.931423 1102415 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-711413 san=[192.168.72.51 192.168.72.51 localhost 127.0.0.1 minikube default-k8s-diff-port-711413]
	I0717 19:58:37.043340 1102415 provision.go:172] copyRemoteCerts
	I0717 19:58:37.043444 1102415 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:58:37.043487 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:58:37.047280 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.047842 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:37.047879 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.048143 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:58:37.048410 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:37.048648 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:58:37.048844 1102415 sshutil.go:53] new ssh client: &{IP:192.168.72.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/default-k8s-diff-port-711413/id_rsa Username:docker}
	I0717 19:58:37.147255 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 19:58:37.175437 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0717 19:58:37.202827 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:58:37.231780 1102415 provision.go:86] duration metric: configureAuth took 308.515103ms
	I0717 19:58:37.231818 1102415 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:58:37.232118 1102415 config.go:182] Loaded profile config "default-k8s-diff-port-711413": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:58:37.232255 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:58:37.235364 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.235964 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:37.236011 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.236296 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:58:37.236533 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:37.236793 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:37.236976 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:58:37.237175 1102415 main.go:141] libmachine: Using SSH client type: native
	I0717 19:58:37.237831 1102415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.51 22 <nil> <nil>}
	I0717 19:58:37.237866 1102415 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:58:37.601591 1102415 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:58:37.601631 1102415 machine.go:91] provisioned docker machine in 958.556319ms
	I0717 19:58:37.601644 1102415 start.go:300] post-start starting for "default-k8s-diff-port-711413" (driver="kvm2")
	I0717 19:58:37.601665 1102415 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:58:37.601692 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	I0717 19:58:37.602018 1102415 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:58:37.602048 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:58:37.604964 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.605335 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:37.605387 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.605486 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:58:37.605822 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:37.606033 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:58:37.606224 1102415 sshutil.go:53] new ssh client: &{IP:192.168.72.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/default-k8s-diff-port-711413/id_rsa Username:docker}
	I0717 19:58:37.696316 1102415 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:58:37.701409 1102415 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 19:58:37.701442 1102415 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/addons for local assets ...
	I0717 19:58:37.701579 1102415 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/files for local assets ...
	I0717 19:58:37.701694 1102415 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> 10689542.pem in /etc/ssl/certs
	I0717 19:58:37.701827 1102415 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:58:37.711545 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:58:37.739525 1102415 start.go:303] post-start completed in 137.838589ms
	I0717 19:58:37.739566 1102415 fix.go:56] fixHost completed within 19.476203721s
	I0717 19:58:37.739599 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:58:37.742744 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.743095 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:37.743127 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.743298 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:58:37.743568 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:37.743768 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:37.743929 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:58:37.744164 1102415 main.go:141] libmachine: Using SSH client type: native
	I0717 19:58:37.744786 1102415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.51 22 <nil> <nil>}
	I0717 19:58:37.744809 1102415 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 19:58:37.870894 1102415 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689623917.842259641
	
	I0717 19:58:37.870923 1102415 fix.go:206] guest clock: 1689623917.842259641
	I0717 19:58:37.870931 1102415 fix.go:219] Guest: 2023-07-17 19:58:37.842259641 +0000 UTC Remote: 2023-07-17 19:58:37.739572977 +0000 UTC m=+273.789942316 (delta=102.686664ms)
	I0717 19:58:37.870992 1102415 fix.go:190] guest clock delta is within tolerance: 102.686664ms
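
The fix.go lines above parse the guest's date +%s.%N output, diff it against the host clock, and accept the machine when the drift is small. A rough sketch of that check; the helper and the 2-second bound in the usage note are assumptions for illustration, not necessarily minikube's values:

	package clockcheck

	import "time"

	// clockDelta converts the guest's `date +%s.%N` reading to a time.Time,
	// diffs it against the host clock, and reports whether the drift stays
	// under the allowed tolerance.
	func clockDelta(guestEpoch float64, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		guest := time.Unix(0, int64(guestEpoch*float64(time.Second)))
		delta := host.Sub(guest)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

For the run above, clockDelta(1689623917.842259641, time.Unix(1689623917, 739572977), 2*time.Second) reports roughly the 102.7ms drift logged here, well inside any reasonable bound.
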
	I0717 19:58:37.871004 1102415 start.go:83] releasing machines lock for "default-k8s-diff-port-711413", held for 19.607687828s
	I0717 19:58:37.871044 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	I0717 19:58:37.871350 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetIP
	I0717 19:58:37.874527 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.874967 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:37.875029 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.875202 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	I0717 19:58:37.875791 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	I0717 19:58:37.876007 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	I0717 19:58:37.876141 1102415 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:58:37.876211 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:58:37.876261 1102415 ssh_runner.go:195] Run: cat /version.json
	I0717 19:58:37.876289 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:58:37.879243 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.879483 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.879717 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:37.879752 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.879861 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:58:37.880090 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:37.880098 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:37.880118 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.880204 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:58:37.880335 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:58:37.880427 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:37.880513 1102415 sshutil.go:53] new ssh client: &{IP:192.168.72.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/default-k8s-diff-port-711413/id_rsa Username:docker}
	I0717 19:58:37.880582 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:58:37.880714 1102415 sshutil.go:53] new ssh client: &{IP:192.168.72.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/default-k8s-diff-port-711413/id_rsa Username:docker}
	W0717 19:58:37.967909 1102415 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 19:58:37.968017 1102415 ssh_runner.go:195] Run: systemctl --version
	I0717 19:58:37.997996 1102415 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:58:38.148654 1102415 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:58:38.156049 1102415 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:58:38.156151 1102415 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:58:38.177835 1102415 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:58:38.177866 1102415 start.go:469] detecting cgroup driver to use...
	I0717 19:58:38.177945 1102415 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:58:38.196359 1102415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:58:38.209697 1102415 docker.go:196] disabling cri-docker service (if available) ...
	I0717 19:58:38.209777 1102415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:58:38.226250 1102415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:58:38.244868 1102415 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:58:38.385454 1102415 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:58:38.527891 1102415 docker.go:212] disabling docker service ...
	I0717 19:58:38.527973 1102415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:58:38.546083 1102415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:58:38.562767 1102415 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:58:38.702706 1102415 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:58:38.828923 1102415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:58:38.845137 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:58:38.866427 1102415 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:58:38.866511 1102415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:58:38.878067 1102415 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:58:38.878157 1102415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:58:38.892494 1102415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:58:38.905822 1102415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:58:38.917786 1102415 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:58:38.931418 1102415 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:58:38.945972 1102415 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:58:38.946039 1102415 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:58:38.964498 1102415 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:58:38.977323 1102415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:58:39.098593 1102415 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:58:39.320821 1102415 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:58:39.320909 1102415 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:58:39.327195 1102415 start.go:537] Will wait 60s for crictl version
	I0717 19:58:39.327285 1102415 ssh_runner.go:195] Run: which crictl
	I0717 19:58:39.333466 1102415 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:58:39.372542 1102415 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 19:58:39.372643 1102415 ssh_runner.go:195] Run: crio --version
	I0717 19:58:39.419356 1102415 ssh_runner.go:195] Run: crio --version
	I0717 19:58:39.467405 1102415 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
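
After restarting CRI-O, the log waits up to 60s for /var/run/crio/crio.sock and then for a working crictl. A simplified sketch of that wait; it checks the local filesystem for brevity, whereas minikube performs the stat over SSH, and the poll interval here is an assumption:

	package crioready

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls for the CRI socket until it exists or the deadline
	// passes, mirroring the "Will wait 60s for socket path" step above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
	}
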
	I0717 19:58:37.898938 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .Start
	I0717 19:58:37.899185 1103141 main.go:141] libmachine: (embed-certs-114855) Ensuring networks are active...
	I0717 19:58:37.900229 1103141 main.go:141] libmachine: (embed-certs-114855) Ensuring network default is active
	I0717 19:58:37.900690 1103141 main.go:141] libmachine: (embed-certs-114855) Ensuring network mk-embed-certs-114855 is active
	I0717 19:58:37.901444 1103141 main.go:141] libmachine: (embed-certs-114855) Getting domain xml...
	I0717 19:58:37.902311 1103141 main.go:141] libmachine: (embed-certs-114855) Creating domain...
	I0717 19:58:39.293109 1103141 main.go:141] libmachine: (embed-certs-114855) Waiting to get IP...
	I0717 19:58:39.294286 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:39.294784 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:39.294877 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:39.294761 1103558 retry.go:31] will retry after 201.93591ms: waiting for machine to come up
	I0717 19:58:39.498428 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:39.499066 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:39.499123 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:39.498979 1103558 retry.go:31] will retry after 321.702493ms: waiting for machine to come up
	I0717 19:58:39.822708 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:39.823258 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:39.823287 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:39.823212 1103558 retry.go:31] will retry after 477.114259ms: waiting for machine to come up
	I0717 19:58:40.302080 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:40.302727 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:40.302755 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:40.302668 1103558 retry.go:31] will retry after 554.321931ms: waiting for machine to come up
	I0717 19:58:38.674825 1102136 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:58:38.697168 1102136 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0717 19:58:38.719030 1102136 ssh_runner.go:195] Run: grep 192.168.61.65	control-plane.minikube.internal$ /etc/hosts
	I0717 19:58:38.724312 1102136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.65	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:58:38.742726 1102136 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472 for IP: 192.168.61.65
	I0717 19:58:38.742830 1102136 certs.go:190] acquiring lock for shared ca certs: {Name:mkdfbc203d7bd874aff2f1c4e5c3352251804e32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:58:38.743029 1102136 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key
	I0717 19:58:38.743082 1102136 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key
	I0717 19:58:38.743238 1102136 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/client.key
	I0717 19:58:38.743316 1102136 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/apiserver.key.71349e66
	I0717 19:58:38.743370 1102136 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/proxy-client.key
	I0717 19:58:38.743527 1102136 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem (1338 bytes)
	W0717 19:58:38.743579 1102136 certs.go:433] ignoring /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954_empty.pem, impossibly tiny 0 bytes
	I0717 19:58:38.743597 1102136 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:58:38.743631 1102136 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem (1082 bytes)
	I0717 19:58:38.743667 1102136 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:58:38.743699 1102136 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem (1675 bytes)
	I0717 19:58:38.743759 1102136 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:58:38.744668 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 19:58:38.773602 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 19:58:38.799675 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:58:38.826050 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 19:58:38.856973 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:58:38.886610 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 19:58:38.916475 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:58:38.945986 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 19:58:38.973415 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem --> /usr/share/ca-certificates/1068954.pem (1338 bytes)
	I0717 19:58:39.002193 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /usr/share/ca-certificates/10689542.pem (1708 bytes)
	I0717 19:58:39.030265 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:58:39.062896 1102136 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:58:39.082877 1102136 ssh_runner.go:195] Run: openssl version
	I0717 19:58:39.090088 1102136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1068954.pem && ln -fs /usr/share/ca-certificates/1068954.pem /etc/ssl/certs/1068954.pem"
	I0717 19:58:39.104372 1102136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1068954.pem
	I0717 19:58:39.110934 1102136 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:52 /usr/share/ca-certificates/1068954.pem
	I0717 19:58:39.111023 1102136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1068954.pem
	I0717 19:58:39.117702 1102136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1068954.pem /etc/ssl/certs/51391683.0"
	I0717 19:58:39.132094 1102136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10689542.pem && ln -fs /usr/share/ca-certificates/10689542.pem /etc/ssl/certs/10689542.pem"
	I0717 19:58:39.149143 1102136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10689542.pem
	I0717 19:58:39.155238 1102136 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:52 /usr/share/ca-certificates/10689542.pem
	I0717 19:58:39.155359 1102136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10689542.pem
	I0717 19:58:39.164149 1102136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10689542.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:58:39.178830 1102136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:58:39.192868 1102136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:58:39.199561 1102136 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:58:39.199663 1102136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:58:39.208054 1102136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:58:39.220203 1102136 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 19:58:39.228030 1102136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:58:39.235220 1102136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:58:39.243450 1102136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:58:39.250709 1102136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:58:39.260912 1102136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:58:39.269318 1102136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
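
The `openssl x509 -noout -checkend 86400` runs above only confirm that each reused control-plane certificate remains valid for at least another 24 hours before the cluster restart proceeds. An equivalent check in Go using crypto/x509 (the helper name and the file subset are illustrative, not minikube's code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid for at
// least the given duration, which is what `openssl x509 -checkend` answers
// with its exit status.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	// Illustrative subset of the certificates checked in the log above.
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		ok, err := validFor(p, 24*time.Hour)
		fmt.Println(p, ok, err)
	}
}
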
	I0717 19:58:39.277511 1102136 kubeadm.go:404] StartCluster: {Name:no-preload-408472 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:no-preload-408472 Namespace:defa
ult APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.65 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:58:39.277701 1102136 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:58:39.277789 1102136 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:58:39.317225 1102136 cri.go:89] found id: ""
	I0717 19:58:39.317321 1102136 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:58:39.330240 1102136 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 19:58:39.330274 1102136 kubeadm.go:636] restartCluster start
	I0717 19:58:39.330351 1102136 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:58:39.343994 1102136 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:39.345762 1102136 kubeconfig.go:92] found "no-preload-408472" server: "https://192.168.61.65:8443"
	I0717 19:58:39.350027 1102136 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:58:39.360965 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:39.361039 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:39.375103 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:39.875778 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:39.875891 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:39.892869 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:40.375344 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:40.375421 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:40.392992 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:40.875474 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:40.875590 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:40.892666 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:41.375224 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:41.375335 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:41.393833 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:41.875377 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:41.875466 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:41.893226 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:42.375846 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:42.375957 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:42.390397 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:42.876105 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:42.876220 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:42.889082 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:43.375654 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:43.375774 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:43.392598 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:39.469543 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetIP
	I0717 19:58:39.472792 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:39.473333 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:39.473386 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:39.473640 1102415 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 19:58:39.478276 1102415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:58:39.491427 1102415 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 19:58:39.491514 1102415 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:58:39.527759 1102415 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0717 19:58:39.527856 1102415 ssh_runner.go:195] Run: which lz4
	I0717 19:58:39.532935 1102415 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 19:58:39.537733 1102415 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:58:39.537785 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0717 19:58:41.480847 1102415 crio.go:444] Took 1.947975 seconds to copy over tarball
	I0717 19:58:41.480932 1102415 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
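
The lines above show the preload path for this profile: `crictl images` finds no preloaded images, so the cached preloaded-images tarball is copied to the guest and unpacked with `tar -I lz4 -C /var -xf /preloaded.tar.lz4`, then removed once extraction completes further down in the log. A hedged Go sketch of just the extract-and-clean-up step (assumes tar and lz4 are installed on the target; the SSH copy is omitted):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyPreload extracts a preloaded image tarball into /var the way the log
// above does (`tar -I lz4 -C /var -xf ...`) and removes the tarball
// afterwards so it does not waste disk space in the guest.
func applyPreload(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload tarball missing: %w", err)
	}
	cmd := exec.Command("tar", "-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		return err
	}
	return os.Remove(tarball)
}

func main() {
	if err := applyPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
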
	I0717 19:58:40.858380 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:40.858925 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:40.858970 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:40.858865 1103558 retry.go:31] will retry after 616.432145ms: waiting for machine to come up
	I0717 19:58:41.476868 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:41.477399 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:41.477434 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:41.477348 1103558 retry.go:31] will retry after 780.737319ms: waiting for machine to come up
	I0717 19:58:42.259853 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:42.260278 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:42.260310 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:42.260216 1103558 retry.go:31] will retry after 858.918849ms: waiting for machine to come up
	I0717 19:58:43.120599 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:43.121211 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:43.121247 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:43.121155 1103558 retry.go:31] will retry after 1.359881947s: waiting for machine to come up
	I0717 19:58:44.482733 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:44.483173 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:44.483203 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:44.483095 1103558 retry.go:31] will retry after 1.298020016s: waiting for machine to come up
	I0717 19:58:43.875260 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:43.875367 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:43.892010 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:44.376275 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:44.376378 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:44.394725 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:44.875258 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:44.875377 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:44.890500 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:45.376203 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:45.376337 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:45.392119 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:45.875466 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:45.875573 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:45.888488 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:46.376141 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:46.376288 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:46.391072 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:46.875635 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:46.875797 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:46.895087 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:47.375551 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:47.375653 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:47.392620 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:47.875197 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:47.875340 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:47.887934 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:48.375469 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:48.375588 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:48.392548 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:44.570404 1102415 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.089433908s)
	I0717 19:58:44.570451 1102415 crio.go:451] Took 3.089562 seconds to extract the tarball
	I0717 19:58:44.570465 1102415 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 19:58:44.615062 1102415 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:58:44.660353 1102415 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 19:58:44.660385 1102415 cache_images.go:84] Images are preloaded, skipping loading
	I0717 19:58:44.660468 1102415 ssh_runner.go:195] Run: crio config
	I0717 19:58:44.726880 1102415 cni.go:84] Creating CNI manager for ""
	I0717 19:58:44.726915 1102415 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:58:44.726946 1102415 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 19:58:44.726973 1102415 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.51 APIServerPort:8444 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-711413 NodeName:default-k8s-diff-port-711413 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:58:44.727207 1102415 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.51
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-711413"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:58:44.727340 1102415 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-711413 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-711413 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0717 19:58:44.727430 1102415 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 19:58:44.740398 1102415 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:58:44.740509 1102415 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:58:44.751288 1102415 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I0717 19:58:44.769779 1102415 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:58:44.788216 1102415 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I0717 19:58:44.808085 1102415 ssh_runner.go:195] Run: grep 192.168.72.51	control-plane.minikube.internal$ /etc/hosts
	I0717 19:58:44.812829 1102415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:58:44.826074 1102415 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413 for IP: 192.168.72.51
	I0717 19:58:44.826123 1102415 certs.go:190] acquiring lock for shared ca certs: {Name:mkdfbc203d7bd874aff2f1c4e5c3352251804e32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:58:44.826373 1102415 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key
	I0717 19:58:44.826440 1102415 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key
	I0717 19:58:44.826543 1102415 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/client.key
	I0717 19:58:44.826629 1102415 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/apiserver.key.f6db28d6
	I0717 19:58:44.826697 1102415 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/proxy-client.key
	I0717 19:58:44.826855 1102415 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem (1338 bytes)
	W0717 19:58:44.826902 1102415 certs.go:433] ignoring /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954_empty.pem, impossibly tiny 0 bytes
	I0717 19:58:44.826915 1102415 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:58:44.826953 1102415 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem (1082 bytes)
	I0717 19:58:44.826988 1102415 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:58:44.827026 1102415 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem (1675 bytes)
	I0717 19:58:44.827091 1102415 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:58:44.828031 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 19:58:44.856357 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 19:58:44.884042 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:58:44.915279 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 19:58:44.945170 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:58:44.974151 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 19:58:45.000387 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:58:45.027617 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 19:58:45.054305 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem --> /usr/share/ca-certificates/1068954.pem (1338 bytes)
	I0717 19:58:45.080828 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /usr/share/ca-certificates/10689542.pem (1708 bytes)
	I0717 19:58:45.107437 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:58:45.135588 1102415 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:58:45.155297 1102415 ssh_runner.go:195] Run: openssl version
	I0717 19:58:45.162096 1102415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10689542.pem && ln -fs /usr/share/ca-certificates/10689542.pem /etc/ssl/certs/10689542.pem"
	I0717 19:58:45.175077 1102415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10689542.pem
	I0717 19:58:45.180966 1102415 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:52 /usr/share/ca-certificates/10689542.pem
	I0717 19:58:45.181050 1102415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10689542.pem
	I0717 19:58:45.187351 1102415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10689542.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:58:45.199795 1102415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:58:45.214273 1102415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:58:45.220184 1102415 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:58:45.220269 1102415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:58:45.227207 1102415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:58:45.239921 1102415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1068954.pem && ln -fs /usr/share/ca-certificates/1068954.pem /etc/ssl/certs/1068954.pem"
	I0717 19:58:45.252978 1102415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1068954.pem
	I0717 19:58:45.259164 1102415 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:52 /usr/share/ca-certificates/1068954.pem
	I0717 19:58:45.259257 1102415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1068954.pem
	I0717 19:58:45.266134 1102415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1068954.pem /etc/ssl/certs/51391683.0"
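
Each `openssl x509 -hash -noout` call above computes a CA certificate's subject hash, and the `ln -fs` that follows creates the matching `<hash>.0` symlink under /etc/ssl/certs that OpenSSL's hashed-directory lookup expects. A small Go sketch of that pairing (assumes openssl is on PATH; the helper name linkBySubjectHash is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash asks openssl for the certificate's subject hash and
// creates the /etc/ssl/certs/<hash>.0 symlink that the c_rehash layout
// expects, matching the `openssl x509 -hash` + `ln -fs` pair in the log.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // -f behaviour: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	// Illustrative: the CA installed as minikubeCA.pem in the log above.
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
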
	I0717 19:58:45.281302 1102415 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 19:58:45.287179 1102415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:58:45.294860 1102415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:58:45.302336 1102415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:58:45.309621 1102415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:58:45.316590 1102415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:58:45.323564 1102415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 19:58:45.330904 1102415 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-711413 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port
-711413 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.51 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountStr
ing:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:58:45.331050 1102415 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:58:45.331115 1102415 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:58:45.368522 1102415 cri.go:89] found id: ""
	I0717 19:58:45.368606 1102415 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:58:45.380610 1102415 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 19:58:45.380640 1102415 kubeadm.go:636] restartCluster start
	I0717 19:58:45.380711 1102415 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:58:45.391395 1102415 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:45.392845 1102415 kubeconfig.go:92] found "default-k8s-diff-port-711413" server: "https://192.168.72.51:8444"
	I0717 19:58:45.395718 1102415 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:58:45.405869 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:45.405954 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:45.417987 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:45.918789 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:45.918924 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:45.935620 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:46.418786 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:46.418918 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:46.435879 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:46.918441 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:46.918570 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:46.934753 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:47.418315 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:47.418429 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:47.434411 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:47.918984 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:47.919143 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:47.930556 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:48.418827 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:48.418915 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:48.430779 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:48.918288 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:48.918395 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:48.929830 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:45.782651 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:45.853667 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:45.853691 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:45.783094 1103558 retry.go:31] will retry after 2.002921571s: waiting for machine to come up
	I0717 19:58:47.788455 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:47.788965 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:47.788995 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:47.788914 1103558 retry.go:31] will retry after 2.108533646s: waiting for machine to come up
	I0717 19:58:49.899541 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:49.900028 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:49.900073 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:49.899974 1103558 retry.go:31] will retry after 3.529635748s: waiting for machine to come up
	I0717 19:58:48.875708 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:48.875803 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:48.893686 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:49.362030 1102136 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0717 19:58:49.362079 1102136 kubeadm.go:1128] stopping kube-system containers ...
	I0717 19:58:49.362096 1102136 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:58:49.362166 1102136 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:58:49.405900 1102136 cri.go:89] found id: ""
	I0717 19:58:49.405997 1102136 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:58:49.429666 1102136 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:58:49.440867 1102136 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:58:49.440938 1102136 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:58:49.454993 1102136 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 19:58:49.455023 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:49.606548 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:50.568083 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:50.782373 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:50.895178 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
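
Rather than a full `kubeadm init`, the restart path above replays individual `kubeadm init phase` steps (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml. A sketch that drives the same phase sequence with os/exec (assumes kubeadm is on PATH; error handling and environment setup are simplified):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runInitPhases replays the phase sequence from the log above against a
// single kubeadm config file. Each phase is an ordinary `kubeadm init phase`
// subcommand, so a failure in one phase stops the restart early.
func runInitPhases(config string) error {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, phase...)
		args = append(args, "--config", config)
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("kubeadm %v: %w", args, err)
		}
	}
	return nil
}

func main() {
	if err := runInitPhases("/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
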
	I0717 19:58:50.999236 1102136 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:58:50.999321 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:51.519969 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:52.019769 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:52.519618 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:53.020330 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:53.519378 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:53.549727 1102136 api_server.go:72] duration metric: took 2.550491567s to wait for apiserver process to appear ...
	I0717 19:58:53.549757 1102136 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:58:53.549778 1102136 api_server.go:253] Checking apiserver healthz at https://192.168.61.65:8443/healthz ...
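
From here the wait switches from checking for a kube-apiserver process to polling /healthz over HTTPS; as the responses further down show, a 403 (anonymous user) or 500 (post-start hooks such as rbac/bootstrap-roles still pending) simply means "retry". A minimal polling sketch in Go (endpoint, interval, timeout, and the skipped TLS verification are illustrative assumptions, not the tested code):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers 200
// or the deadline passes. Non-200 answers are treated as "not ready yet",
// matching the 403 and 500 responses captured in the log below.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Anonymous probe against the apiserver's self-signed certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.65:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
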
	I0717 19:58:49.418724 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:49.418839 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:49.431867 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:49.918433 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:49.918602 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:49.933324 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:50.418991 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:50.419113 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:50.433912 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:50.919128 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:50.919228 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:50.934905 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:51.418418 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:51.418557 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:51.430640 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:51.918136 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:51.918248 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:51.933751 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:52.418277 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:52.418388 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:52.434907 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:52.918570 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:52.918702 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:52.933426 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:53.418734 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:53.418828 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:53.431710 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:53.918381 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:53.918502 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:53.930053 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:53.431544 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:53.432055 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:53.432087 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:53.431995 1103558 retry.go:31] will retry after 3.133721334s: waiting for machine to come up
	I0717 19:58:57.990532 1102136 api_server.go:279] https://192.168.61.65:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:58:57.990581 1102136 api_server.go:103] status: https://192.168.61.65:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:58:58.491387 1102136 api_server.go:253] Checking apiserver healthz at https://192.168.61.65:8443/healthz ...
	I0717 19:58:58.501594 1102136 api_server.go:279] https://192.168.61.65:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:58:58.501636 1102136 api_server.go:103] status: https://192.168.61.65:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:58:54.418156 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:54.418290 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:54.430262 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:54.918831 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:54.918933 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:54.930380 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:55.406385 1102415 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0717 19:58:55.406432 1102415 kubeadm.go:1128] stopping kube-system containers ...
	I0717 19:58:55.406451 1102415 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:58:55.406530 1102415 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:58:55.444364 1102415 cri.go:89] found id: ""
	I0717 19:58:55.444447 1102415 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:58:55.460367 1102415 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:58:55.472374 1102415 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:58:55.472469 1102415 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:58:55.482078 1102415 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 19:58:55.482121 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:55.630428 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:56.221310 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:56.460424 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:56.570707 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:56.691954 1102415 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:58:56.692053 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:57.209115 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:57.708801 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:58.209204 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:58.709268 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:58.991630 1102136 api_server.go:253] Checking apiserver healthz at https://192.168.61.65:8443/healthz ...
	I0717 19:58:58.999253 1102136 api_server.go:279] https://192.168.61.65:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:58:58.999295 1102136 api_server.go:103] status: https://192.168.61.65:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:58:59.491062 1102136 api_server.go:253] Checking apiserver healthz at https://192.168.61.65:8443/healthz ...
	I0717 19:58:59.498441 1102136 api_server.go:279] https://192.168.61.65:8443/healthz returned 200:
	ok
	I0717 19:58:59.514314 1102136 api_server.go:141] control plane version: v1.27.3
	I0717 19:58:59.514353 1102136 api_server.go:131] duration metric: took 5.964587051s to wait for apiserver health ...
	I0717 19:58:59.514368 1102136 cni.go:84] Creating CNI manager for ""
	I0717 19:58:59.514403 1102136 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:58:59.516809 1102136 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:58:56.567585 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:56.568167 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:56.568203 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:56.568069 1103558 retry.go:31] will retry after 4.627498539s: waiting for machine to come up
	I0717 19:58:59.518908 1102136 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:58:59.549246 1102136 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 19:58:59.598652 1102136 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:58:59.614418 1102136 system_pods.go:59] 8 kube-system pods found
	I0717 19:58:59.614482 1102136 system_pods.go:61] "coredns-5d78c9869d-9mxdj" [fbff09fd-436d-4208-9187-b6312aa1c223] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 19:58:59.614506 1102136 system_pods.go:61] "etcd-no-preload-408472" [7125f19d-c1ed-4b1f-99be-207bbf5d8c70] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:58:59.614519 1102136 system_pods.go:61] "kube-apiserver-no-preload-408472" [0e54eaed-daad-434a-ad3b-96f7fb924099] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:58:59.614529 1102136 system_pods.go:61] "kube-controller-manager-no-preload-408472" [38ee8079-8142-45a9-8d5a-4abbc5c8bb3b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:58:59.614547 1102136 system_pods.go:61] "kube-proxy-cntdn" [8653567b-abf9-468c-a030-45fc53fa0cc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 19:58:59.614558 1102136 system_pods.go:61] "kube-scheduler-no-preload-408472" [e51560a1-c1b0-407c-8635-df512bd033b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:58:59.614575 1102136 system_pods.go:61] "metrics-server-74d5c6b9c-hnngh" [dfff837e-dbba-4795-935d-9562d2744169] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:58:59.614637 1102136 system_pods.go:61] "storage-provisioner" [1aefd8ef-dec9-4e37-8648-8e5a62622cd3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 19:58:59.614658 1102136 system_pods.go:74] duration metric: took 15.975122ms to wait for pod list to return data ...
	I0717 19:58:59.614669 1102136 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:58:59.621132 1102136 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 19:58:59.621181 1102136 node_conditions.go:123] node cpu capacity is 2
	I0717 19:58:59.621197 1102136 node_conditions.go:105] duration metric: took 6.519635ms to run NodePressure ...
	I0717 19:58:59.621224 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:59.909662 1102136 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 19:58:59.915153 1102136 kubeadm.go:787] kubelet initialised
	I0717 19:58:59.915190 1102136 kubeadm.go:788] duration metric: took 5.491139ms waiting for restarted kubelet to initialise ...
	I0717 19:58:59.915201 1102136 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:58:59.925196 1102136 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-9mxdj" in "kube-system" namespace to be "Ready" ...
	I0717 19:58:59.934681 1102136 pod_ready.go:97] node "no-preload-408472" hosting pod "coredns-5d78c9869d-9mxdj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:58:59.934715 1102136 pod_ready.go:81] duration metric: took 9.478384ms waiting for pod "coredns-5d78c9869d-9mxdj" in "kube-system" namespace to be "Ready" ...
	E0717 19:58:59.934728 1102136 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-408472" hosting pod "coredns-5d78c9869d-9mxdj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:58:59.934742 1102136 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:58:59.949704 1102136 pod_ready.go:97] node "no-preload-408472" hosting pod "etcd-no-preload-408472" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:58:59.949744 1102136 pod_ready.go:81] duration metric: took 14.992167ms waiting for pod "etcd-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	E0717 19:58:59.949757 1102136 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-408472" hosting pod "etcd-no-preload-408472" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:58:59.949766 1102136 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:58:59.958029 1102136 pod_ready.go:97] node "no-preload-408472" hosting pod "kube-apiserver-no-preload-408472" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:58:59.958083 1102136 pod_ready.go:81] duration metric: took 8.306713ms waiting for pod "kube-apiserver-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	E0717 19:58:59.958096 1102136 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-408472" hosting pod "kube-apiserver-no-preload-408472" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:58:59.958110 1102136 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:00.003638 1102136 pod_ready.go:97] node "no-preload-408472" hosting pod "kube-controller-manager-no-preload-408472" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:59:00.003689 1102136 pod_ready.go:81] duration metric: took 45.565817ms waiting for pod "kube-controller-manager-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:00.003702 1102136 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-408472" hosting pod "kube-controller-manager-no-preload-408472" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:59:00.003714 1102136 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cntdn" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:00.403384 1102136 pod_ready.go:97] node "no-preload-408472" hosting pod "kube-proxy-cntdn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:59:00.403421 1102136 pod_ready.go:81] duration metric: took 399.697327ms waiting for pod "kube-proxy-cntdn" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:00.403431 1102136 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-408472" hosting pod "kube-proxy-cntdn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:59:00.403440 1102136 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:00.803159 1102136 pod_ready.go:97] node "no-preload-408472" hosting pod "kube-scheduler-no-preload-408472" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:59:00.803192 1102136 pod_ready.go:81] duration metric: took 399.744356ms waiting for pod "kube-scheduler-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:00.803205 1102136 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-408472" hosting pod "kube-scheduler-no-preload-408472" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:59:00.803217 1102136 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:01.206222 1102136 pod_ready.go:97] node "no-preload-408472" hosting pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:59:01.206247 1102136 pod_ready.go:81] duration metric: took 403.0216ms waiting for pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:01.206256 1102136 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-408472" hosting pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:59:01.206271 1102136 pod_ready.go:38] duration metric: took 1.291054316s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:59:01.206293 1102136 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 19:59:01.225481 1102136 ops.go:34] apiserver oom_adj: -16
	I0717 19:59:01.225516 1102136 kubeadm.go:640] restartCluster took 21.895234291s
	I0717 19:59:01.225528 1102136 kubeadm.go:406] StartCluster complete in 21.948029137s
	I0717 19:59:01.225551 1102136 settings.go:142] acquiring lock: {Name:mk3323296c186763db6ebb5a1c5ae94a6b1a6242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:59:01.225672 1102136 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 19:59:01.228531 1102136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/kubeconfig: {Name:mk7f4fe48169c87be980c7edd9dbe55d4ea8b9ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:59:01.228913 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 19:59:01.229088 1102136 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 19:59:01.229192 1102136 config.go:182] Loaded profile config "no-preload-408472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:59:01.229244 1102136 addons.go:69] Setting metrics-server=true in profile "no-preload-408472"
	I0717 19:59:01.229249 1102136 addons.go:69] Setting default-storageclass=true in profile "no-preload-408472"
	I0717 19:59:01.229280 1102136 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-408472"
	I0717 19:59:01.229299 1102136 addons.go:231] Setting addon metrics-server=true in "no-preload-408472"
	W0717 19:59:01.229307 1102136 addons.go:240] addon metrics-server should already be in state true
	I0717 19:59:01.229241 1102136 addons.go:69] Setting storage-provisioner=true in profile "no-preload-408472"
	I0717 19:59:01.229353 1102136 addons.go:231] Setting addon storage-provisioner=true in "no-preload-408472"
	W0717 19:59:01.229366 1102136 addons.go:240] addon storage-provisioner should already be in state true
	I0717 19:59:01.229440 1102136 host.go:66] Checking if "no-preload-408472" exists ...
	I0717 19:59:01.229447 1102136 host.go:66] Checking if "no-preload-408472" exists ...
	I0717 19:59:01.229764 1102136 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:01.229804 1102136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:01.229833 1102136 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:01.229854 1102136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:01.229897 1102136 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:01.229943 1102136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:01.235540 1102136 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-408472" context rescaled to 1 replicas
	I0717 19:59:01.235641 1102136 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.65 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:59:01.239320 1102136 out.go:177] * Verifying Kubernetes components...
	I0717 19:59:01.241167 1102136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:59:01.247222 1102136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43199
	I0717 19:59:01.247751 1102136 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:01.248409 1102136 main.go:141] libmachine: Using API Version  1
	I0717 19:59:01.248438 1102136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:01.248825 1102136 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:01.249141 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetState
	I0717 19:59:01.249823 1102136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42025
	I0717 19:59:01.249829 1102136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34569
	I0717 19:59:01.250716 1102136 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:01.250724 1102136 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:01.251383 1102136 main.go:141] libmachine: Using API Version  1
	I0717 19:59:01.251409 1102136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:01.251591 1102136 main.go:141] libmachine: Using API Version  1
	I0717 19:59:01.251612 1102136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:01.252011 1102136 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:01.252078 1102136 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:01.252646 1102136 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:01.252679 1102136 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:01.252688 1102136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:01.252700 1102136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:01.270584 1102136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33003
	I0717 19:59:01.270664 1102136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40173
	I0717 19:59:01.271057 1102136 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:01.271170 1102136 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:01.271634 1102136 main.go:141] libmachine: Using API Version  1
	I0717 19:59:01.271656 1102136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:01.271782 1102136 main.go:141] libmachine: Using API Version  1
	I0717 19:59:01.271807 1102136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:01.272018 1102136 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:01.272158 1102136 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:01.272237 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetState
	I0717 19:59:01.272362 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetState
	I0717 19:59:01.274521 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	I0717 19:59:01.274525 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	I0717 19:59:01.277458 1102136 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 19:59:01.279611 1102136 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:59:02.603147 1101908 start.go:369] acquired machines lock for "old-k8s-version-149000" in 1m3.679538618s
	I0717 19:59:02.603207 1101908 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:59:02.603219 1101908 fix.go:54] fixHost starting: 
	I0717 19:59:02.603691 1101908 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:02.603736 1101908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:02.625522 1101908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46275
	I0717 19:59:02.626230 1101908 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:02.626836 1101908 main.go:141] libmachine: Using API Version  1
	I0717 19:59:02.626876 1101908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:02.627223 1101908 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:02.627395 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	I0717 19:59:02.627513 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetState
	I0717 19:59:02.629627 1101908 fix.go:102] recreateIfNeeded on old-k8s-version-149000: state=Stopped err=<nil>
	I0717 19:59:02.629669 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	W0717 19:59:02.629894 1101908 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 19:59:02.632584 1101908 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-149000" ...
	I0717 19:59:01.279643 1102136 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 19:59:01.281507 1102136 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:59:01.281513 1102136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 19:59:01.281520 1102136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 19:59:01.281545 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:59:01.281545 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:59:01.286403 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:59:01.286708 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:59:01.286766 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:59:01.286801 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:59:01.287001 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:59:01.287264 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:59:01.287523 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:59:01.287525 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:59:01.287606 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:59:01.287736 1102136 sshutil.go:53] new ssh client: &{IP:192.168.61.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/no-preload-408472/id_rsa Username:docker}
	I0717 19:59:01.287791 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:59:01.288610 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:59:01.288821 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:59:01.288982 1102136 sshutil.go:53] new ssh client: &{IP:192.168.61.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/no-preload-408472/id_rsa Username:docker}
	I0717 19:59:01.291242 1102136 addons.go:231] Setting addon default-storageclass=true in "no-preload-408472"
	W0717 19:59:01.291259 1102136 addons.go:240] addon default-storageclass should already be in state true
	I0717 19:59:01.291287 1102136 host.go:66] Checking if "no-preload-408472" exists ...
	I0717 19:59:01.291542 1102136 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:01.291569 1102136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:01.309690 1102136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43629
	I0717 19:59:01.310234 1102136 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:01.310915 1102136 main.go:141] libmachine: Using API Version  1
	I0717 19:59:01.310944 1102136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:01.311356 1102136 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:01.311903 1102136 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:01.311953 1102136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:01.350859 1102136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36839
	I0717 19:59:01.351342 1102136 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:01.351922 1102136 main.go:141] libmachine: Using API Version  1
	I0717 19:59:01.351950 1102136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:01.352334 1102136 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:01.352512 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetState
	I0717 19:59:01.354421 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	I0717 19:59:01.354815 1102136 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 19:59:01.354832 1102136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 19:59:01.354853 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:59:01.358180 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:59:01.358632 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:59:01.358651 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:59:01.358833 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:59:01.359049 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:59:01.359435 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:59:01.359582 1102136 sshutil.go:53] new ssh client: &{IP:192.168.61.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/no-preload-408472/id_rsa Username:docker}
	I0717 19:59:01.510575 1102136 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 19:59:01.510598 1102136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 19:59:01.534331 1102136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 19:59:01.545224 1102136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:59:01.582904 1102136 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 19:59:01.582945 1102136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 19:59:01.645312 1102136 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:59:01.645342 1102136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 19:59:01.715240 1102136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:59:01.746252 1102136 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0717 19:59:01.746249 1102136 node_ready.go:35] waiting up to 6m0s for node "no-preload-408472" to be "Ready" ...
	I0717 19:58:59.208473 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:59.241367 1102415 api_server.go:72] duration metric: took 2.549409381s to wait for apiserver process to appear ...
	I0717 19:58:59.241403 1102415 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:58:59.241432 1102415 api_server.go:253] Checking apiserver healthz at https://192.168.72.51:8444/healthz ...
	I0717 19:59:03.909722 1102415 api_server.go:279] https://192.168.72.51:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:59:03.909763 1102415 api_server.go:103] status: https://192.168.72.51:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:59:03.702857 1102136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.168474279s)
	I0717 19:59:03.702921 1102136 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:03.702938 1102136 main.go:141] libmachine: (no-preload-408472) Calling .Close
	I0717 19:59:03.703307 1102136 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:03.703331 1102136 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:03.703343 1102136 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:03.703353 1102136 main.go:141] libmachine: (no-preload-408472) Calling .Close
	I0717 19:59:03.703705 1102136 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:03.703735 1102136 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:03.703753 1102136 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:03.703766 1102136 main.go:141] libmachine: (no-preload-408472) Calling .Close
	I0717 19:59:03.705061 1102136 main.go:141] libmachine: (no-preload-408472) DBG | Closing plugin on server side
	I0717 19:59:03.705164 1102136 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:03.705187 1102136 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:03.793171 1102136 node_ready.go:58] node "no-preload-408472" has status "Ready":"False"
	I0717 19:59:04.294821 1102136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.749544143s)
	I0717 19:59:04.294904 1102136 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:04.294922 1102136 main.go:141] libmachine: (no-preload-408472) Calling .Close
	I0717 19:59:04.295362 1102136 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:04.295380 1102136 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:04.295391 1102136 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:04.295403 1102136 main.go:141] libmachine: (no-preload-408472) Calling .Close
	I0717 19:59:04.295470 1102136 main.go:141] libmachine: (no-preload-408472) DBG | Closing plugin on server side
	I0717 19:59:04.295674 1102136 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:04.295703 1102136 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:04.349340 1102136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.634046821s)
	I0717 19:59:04.349410 1102136 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:04.349428 1102136 main.go:141] libmachine: (no-preload-408472) Calling .Close
	I0717 19:59:04.349817 1102136 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:04.349837 1102136 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:04.349848 1102136 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:04.349858 1102136 main.go:141] libmachine: (no-preload-408472) Calling .Close
	I0717 19:59:04.349864 1102136 main.go:141] libmachine: (no-preload-408472) DBG | Closing plugin on server side
	I0717 19:59:04.350081 1102136 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:04.350097 1102136 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:04.350116 1102136 addons.go:467] Verifying addon metrics-server=true in "no-preload-408472"
	I0717 19:59:04.353040 1102136 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0717 19:59:01.198818 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.199367 1103141 main.go:141] libmachine: (embed-certs-114855) Found IP for machine: 192.168.39.213
	I0717 19:59:01.199394 1103141 main.go:141] libmachine: (embed-certs-114855) Reserving static IP address...
	I0717 19:59:01.199415 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has current primary IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.199879 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "embed-certs-114855", mac: "52:54:00:d6:57:9a", ip: "192.168.39.213"} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:01.199916 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | skip adding static IP to network mk-embed-certs-114855 - found existing host DHCP lease matching {name: "embed-certs-114855", mac: "52:54:00:d6:57:9a", ip: "192.168.39.213"}
	I0717 19:59:01.199934 1103141 main.go:141] libmachine: (embed-certs-114855) Reserved static IP address: 192.168.39.213
	I0717 19:59:01.199952 1103141 main.go:141] libmachine: (embed-certs-114855) Waiting for SSH to be available...
	I0717 19:59:01.199960 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | Getting to WaitForSSH function...
	I0717 19:59:01.202401 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.202876 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:01.202910 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.203075 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | Using SSH client type: external
	I0717 19:59:01.203121 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | Using SSH private key: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/embed-certs-114855/id_rsa (-rw-------)
	I0717 19:59:01.203172 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.213 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/embed-certs-114855/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:59:01.203195 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | About to run SSH command:
	I0717 19:59:01.203208 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | exit 0
	I0717 19:59:01.298366 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | SSH cmd err, output: <nil>: 
	I0717 19:59:01.298876 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetConfigRaw
	I0717 19:59:01.299753 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetIP
	I0717 19:59:01.303356 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.304237 1103141 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855/config.json ...
	I0717 19:59:01.304526 1103141 machine.go:88] provisioning docker machine ...
	I0717 19:59:01.304569 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 19:59:01.304668 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:01.304694 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.304847 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetMachineName
	I0717 19:59:01.305079 1103141 buildroot.go:166] provisioning hostname "embed-certs-114855"
	I0717 19:59:01.305103 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetMachineName
	I0717 19:59:01.305324 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 19:59:01.308214 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.308591 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:01.308630 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.308805 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 19:59:01.309016 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:01.309195 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:01.309346 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 19:59:01.309591 1103141 main.go:141] libmachine: Using SSH client type: native
	I0717 19:59:01.310205 1103141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0717 19:59:01.310227 1103141 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-114855 && echo "embed-certs-114855" | sudo tee /etc/hostname
	I0717 19:59:01.453113 1103141 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-114855
	
	I0717 19:59:01.453149 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 19:59:01.456502 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.456918 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:01.456981 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.457107 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 19:59:01.457291 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:01.457514 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:01.457711 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 19:59:01.457923 1103141 main.go:141] libmachine: Using SSH client type: native
	I0717 19:59:01.458567 1103141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0717 19:59:01.458597 1103141 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-114855' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-114855/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-114855' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:59:01.599062 1103141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:59:01.599112 1103141 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16890-1061725/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-1061725/.minikube}
	I0717 19:59:01.599143 1103141 buildroot.go:174] setting up certificates
	I0717 19:59:01.599161 1103141 provision.go:83] configureAuth start
	I0717 19:59:01.599194 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetMachineName
	I0717 19:59:01.599579 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetIP
	I0717 19:59:01.602649 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.603014 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:01.603050 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.603218 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 19:59:01.606042 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.606485 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:01.606531 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.606679 1103141 provision.go:138] copyHostCerts
	I0717 19:59:01.606754 1103141 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem, removing ...
	I0717 19:59:01.606767 1103141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem
	I0717 19:59:01.606839 1103141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem (1082 bytes)
	I0717 19:59:01.607009 1103141 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem, removing ...
	I0717 19:59:01.607025 1103141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem
	I0717 19:59:01.607061 1103141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem (1123 bytes)
	I0717 19:59:01.607158 1103141 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem, removing ...
	I0717 19:59:01.607174 1103141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem
	I0717 19:59:01.607204 1103141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem (1675 bytes)
	I0717 19:59:01.607298 1103141 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem org=jenkins.embed-certs-114855 san=[192.168.39.213 192.168.39.213 localhost 127.0.0.1 minikube embed-certs-114855]
	I0717 19:59:01.721082 1103141 provision.go:172] copyRemoteCerts
	I0717 19:59:01.721179 1103141 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:59:01.721223 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 19:59:01.724636 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.725093 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:01.725127 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.725418 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 19:59:01.725708 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:01.725896 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 19:59:01.726056 1103141 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/embed-certs-114855/id_rsa Username:docker}
	I0717 19:59:01.826710 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0717 19:59:01.861153 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 19:59:01.889779 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 19:59:01.919893 1103141 provision.go:86] duration metric: configureAuth took 320.712718ms
	I0717 19:59:01.919929 1103141 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:59:01.920192 1103141 config.go:182] Loaded profile config "embed-certs-114855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:59:01.920283 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 19:59:01.923585 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.926174 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:01.926264 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.926897 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 19:59:01.927167 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:01.927365 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:01.927512 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 19:59:01.927712 1103141 main.go:141] libmachine: Using SSH client type: native
	I0717 19:59:01.928326 1103141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0717 19:59:01.928350 1103141 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:59:02.302372 1103141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:59:02.302427 1103141 machine.go:91] provisioned docker machine in 997.853301ms
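For reference, a minimal sketch of confirming the runtime option written just above on the guest. The idea that crio.service sources /etc/sysconfig/crio.minikube via an EnvironmentFile entry is my assumption about the buildroot image, not something stated in this log; the file path and flag value come from the log itself.
	# Show the options file the provisioner wrote (path from the log above)
	cat /etc/sysconfig/crio.minikube
	# Check whether the crio unit sources it (assumption: wired up via EnvironmentFile)
	systemctl cat crio.service | grep -i environment
	# After the restart, the flag should appear on the running crio process
	ps -o args= -C crio | tr ' ' '\n' | grep insecure-registry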
	I0717 19:59:02.302441 1103141 start.go:300] post-start starting for "embed-certs-114855" (driver="kvm2")
	I0717 19:59:02.302455 1103141 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:59:02.302487 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 19:59:02.302859 1103141 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:59:02.302900 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 19:59:02.305978 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:02.306544 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:02.306626 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:02.306769 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 19:59:02.306996 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:02.307231 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 19:59:02.307403 1103141 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/embed-certs-114855/id_rsa Username:docker}
	I0717 19:59:02.408835 1103141 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:59:02.415119 1103141 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 19:59:02.415157 1103141 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/addons for local assets ...
	I0717 19:59:02.415256 1103141 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/files for local assets ...
	I0717 19:59:02.415444 1103141 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> 10689542.pem in /etc/ssl/certs
	I0717 19:59:02.415570 1103141 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:59:02.430800 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:59:02.465311 1103141 start.go:303] post-start completed in 162.851156ms
	I0717 19:59:02.465347 1103141 fix.go:56] fixHost completed within 24.594172049s
	I0717 19:59:02.465375 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 19:59:02.468945 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:02.469406 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:02.469443 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:02.469704 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 19:59:02.469972 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:02.470166 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:02.470301 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 19:59:02.470501 1103141 main.go:141] libmachine: Using SSH client type: native
	I0717 19:59:02.471120 1103141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0717 19:59:02.471159 1103141 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:59:02.602921 1103141 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689623942.546317761
	
	I0717 19:59:02.602957 1103141 fix.go:206] guest clock: 1689623942.546317761
	I0717 19:59:02.602970 1103141 fix.go:219] Guest: 2023-07-17 19:59:02.546317761 +0000 UTC Remote: 2023-07-17 19:59:02.465351333 +0000 UTC m=+106.772168927 (delta=80.966428ms)
	I0717 19:59:02.603036 1103141 fix.go:190] guest clock delta is within tolerance: 80.966428ms
	I0717 19:59:02.603053 1103141 start.go:83] releasing machines lock for "embed-certs-114855", held for 24.731922082s
	I0717 19:59:02.604022 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 19:59:02.604447 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetIP
	I0717 19:59:02.608397 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:02.608991 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:02.609030 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:02.609308 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 19:59:02.610102 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 19:59:02.610386 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 19:59:02.610634 1103141 ssh_runner.go:195] Run: cat /version.json
	I0717 19:59:02.610677 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 19:59:02.611009 1103141 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:59:02.611106 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 19:59:02.614739 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:02.615121 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:02.615253 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:02.616278 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:02.616386 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 19:59:02.616802 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:02.616829 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:02.617030 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 19:59:02.617096 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:02.617395 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:02.617442 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 19:59:02.617597 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 19:59:02.617826 1103141 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/embed-certs-114855/id_rsa Username:docker}
	I0717 19:59:02.618522 1103141 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/embed-certs-114855/id_rsa Username:docker}
	W0717 19:59:02.745192 1103141 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 19:59:02.745275 1103141 ssh_runner.go:195] Run: systemctl --version
	I0717 19:59:02.752196 1103141 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:59:02.903288 1103141 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:59:02.911818 1103141 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:59:02.911917 1103141 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:59:02.933786 1103141 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:59:02.933883 1103141 start.go:469] detecting cgroup driver to use...
	I0717 19:59:02.934004 1103141 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:59:02.955263 1103141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:59:02.974997 1103141 docker.go:196] disabling cri-docker service (if available) ...
	I0717 19:59:02.975077 1103141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:59:02.994203 1103141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:59:03.014446 1103141 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:59:03.198307 1103141 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:59:03.397392 1103141 docker.go:212] disabling docker service ...
	I0717 19:59:03.397591 1103141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:59:03.418509 1103141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:59:03.437373 1103141 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:59:03.613508 1103141 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:59:03.739647 1103141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:59:03.754406 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:59:03.777929 1103141 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:59:03.778091 1103141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:59:03.790606 1103141 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:59:03.790721 1103141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:59:03.804187 1103141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:59:03.817347 1103141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
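The sed edits above should leave the following keys in /etc/crio/crio.conf.d/02-crio.conf; a quick way to confirm, with the expected values taken from the log lines above rather than re-verified here:
	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.9"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"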
	I0717 19:59:03.828813 1103141 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:59:03.840430 1103141 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:59:03.850240 1103141 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:59:03.850319 1103141 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:59:03.865894 1103141 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
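Because the sysctl probe failed before br_netfilter was loaded, the runner loads the module and enables IP forwarding. A sketch of verifying the resulting state (the expectation that bridge-nf-call-iptables defaults to 1 once the module is loaded is an assumption about the kernel, not taken from this run):
	lsmod | grep br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
	# both are expected to report 1 for pod networking to work with iptables-based kube-proxy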
	I0717 19:59:03.882258 1103141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:59:04.017800 1103141 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:59:04.248761 1103141 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:59:04.248842 1103141 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:59:04.257893 1103141 start.go:537] Will wait 60s for crictl version
	I0717 19:59:04.257984 1103141 ssh_runner.go:195] Run: which crictl
	I0717 19:59:04.264221 1103141 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:59:04.305766 1103141 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 19:59:04.305851 1103141 ssh_runner.go:195] Run: crio --version
	I0717 19:59:04.375479 1103141 ssh_runner.go:195] Run: crio --version
	I0717 19:59:04.436461 1103141 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0717 19:59:04.438378 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetIP
	I0717 19:59:04.442194 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:04.442754 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:04.442792 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:04.443221 1103141 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 19:59:04.448534 1103141 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:59:04.465868 1103141 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 19:59:04.465946 1103141 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:59:04.502130 1103141 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0717 19:59:04.502219 1103141 ssh_runner.go:195] Run: which lz4
	I0717 19:59:04.507394 1103141 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 19:59:04.512404 1103141 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:59:04.512452 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0717 19:59:04.409929 1102415 api_server.go:253] Checking apiserver healthz at https://192.168.72.51:8444/healthz ...
	I0717 19:59:04.419102 1102415 api_server.go:279] https://192.168.72.51:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:59:04.419138 1102415 api_server.go:103] status: https://192.168.72.51:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:59:04.910761 1102415 api_server.go:253] Checking apiserver healthz at https://192.168.72.51:8444/healthz ...
	I0717 19:59:04.919844 1102415 api_server.go:279] https://192.168.72.51:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:59:04.919898 1102415 api_server.go:103] status: https://192.168.72.51:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:59:05.410298 1102415 api_server.go:253] Checking apiserver healthz at https://192.168.72.51:8444/healthz ...
	I0717 19:59:05.424961 1102415 api_server.go:279] https://192.168.72.51:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:59:05.425002 1102415 api_server.go:103] status: https://192.168.72.51:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:59:05.910377 1102415 api_server.go:253] Checking apiserver healthz at https://192.168.72.51:8444/healthz ...
	I0717 19:59:05.924698 1102415 api_server.go:279] https://192.168.72.51:8444/healthz returned 200:
	ok
	I0717 19:59:05.949272 1102415 api_server.go:141] control plane version: v1.27.3
	I0717 19:59:05.949308 1102415 api_server.go:131] duration metric: took 6.707896837s to wait for apiserver health ...
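The 500 responses above only mean that post-start hooks (rbac/bootstrap-roles, the bootstrap priority classes) had not finished; the poller retries until /healthz returns 200. A manual probe of the same endpoint might look like the sketch below. The curl flags are my choice, and reading /healthz anonymously relies on the default RBAC public-info binding, which is an assumption about this cluster.
	curl -k -sS -o /dev/null -w '%{http_code}\n' https://192.168.72.51:8444/healthz
	# add ?verbose to see the per-hook [+]/[-] breakdown shown in the log
	curl -k -sS 'https://192.168.72.51:8444/healthz?verbose'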
	I0717 19:59:05.949321 1102415 cni.go:84] Creating CNI manager for ""
	I0717 19:59:05.949334 1102415 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:59:05.952250 1102415 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:59:02.634580 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .Start
	I0717 19:59:02.635005 1101908 main.go:141] libmachine: (old-k8s-version-149000) Ensuring networks are active...
	I0717 19:59:02.635919 1101908 main.go:141] libmachine: (old-k8s-version-149000) Ensuring network default is active
	I0717 19:59:02.636328 1101908 main.go:141] libmachine: (old-k8s-version-149000) Ensuring network mk-old-k8s-version-149000 is active
	I0717 19:59:02.637168 1101908 main.go:141] libmachine: (old-k8s-version-149000) Getting domain xml...
	I0717 19:59:02.638177 1101908 main.go:141] libmachine: (old-k8s-version-149000) Creating domain...
	I0717 19:59:04.249328 1101908 main.go:141] libmachine: (old-k8s-version-149000) Waiting to get IP...
	I0717 19:59:04.250286 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:04.250925 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:04.251047 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:04.250909 1103733 retry.go:31] will retry after 305.194032ms: waiting for machine to come up
	I0717 19:59:04.558456 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:04.559354 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:04.559387 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:04.559290 1103733 retry.go:31] will retry after 338.882261ms: waiting for machine to come up
	I0717 19:59:04.900152 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:04.900673 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:04.900700 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:04.900616 1103733 retry.go:31] will retry after 334.664525ms: waiting for machine to come up
	I0717 19:59:05.236557 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:05.237252 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:05.237280 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:05.237121 1103733 retry.go:31] will retry after 410.314805ms: waiting for machine to come up
	I0717 19:59:05.648936 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:05.649630 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:05.649665 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:05.649572 1103733 retry.go:31] will retry after 482.724985ms: waiting for machine to come up
	I0717 19:59:06.135159 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:06.135923 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:06.135961 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:06.135851 1103733 retry.go:31] will retry after 646.078047ms: waiting for machine to come up
	I0717 19:59:06.783788 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:06.784327 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:06.784352 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:06.784239 1103733 retry.go:31] will retry after 1.176519187s: waiting for machine to come up
	I0717 19:59:05.954319 1102415 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:59:06.005185 1102415 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
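The 457-byte file copied above is minikube's bridge CNI config. Its exact contents are not shown in this log; the snippet below is only an illustration of the general shape of a bridge-type conflist, not the literal file.
	sudo cat /etc/cni/net.d/1-k8s.conflist
	# roughly: { "name": "bridge", "cniVersion": "...", "plugins": [
	#   { "type": "bridge", "bridge": "bridge", "ipam": { "type": "host-local", ... } },
	#   { "type": "portmap", ... } ] }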
	I0717 19:59:06.070856 1102415 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:59:06.086358 1102415 system_pods.go:59] 8 kube-system pods found
	I0717 19:59:06.086429 1102415 system_pods.go:61] "coredns-5d78c9869d-rjqsv" [f27e2de9-9849-40e2-b6dc-1ee27537b1e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 19:59:06.086448 1102415 system_pods.go:61] "etcd-default-k8s-diff-port-711413" [15f74e3f-61a1-4464-bbff-d336f6df4b6e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:59:06.086462 1102415 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-711413" [c164ab32-6a1d-4079-9b58-7da96eabc60e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:59:06.086481 1102415 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-711413" [7dc149c8-be03-4fd6-b945-c63e95a28470] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:59:06.086498 1102415 system_pods.go:61] "kube-proxy-9qfpg" [ecb84bb9-57a2-4a42-8104-b792d38479ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 19:59:06.086513 1102415 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-711413" [f2cb374f-674e-4a08-82e0-0932b732b485] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:59:06.086526 1102415 system_pods.go:61] "metrics-server-74d5c6b9c-hzcd7" [17e01399-9910-4f01-abe7-3eae271af1db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:59:06.086536 1102415 system_pods.go:61] "storage-provisioner" [43705714-97f1-4b06-8eeb-04d60c22112a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 19:59:06.086546 1102415 system_pods.go:74] duration metric: took 15.663084ms to wait for pod list to return data ...
	I0717 19:59:06.086556 1102415 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:59:06.113146 1102415 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 19:59:06.113186 1102415 node_conditions.go:123] node cpu capacity is 2
	I0717 19:59:06.113203 1102415 node_conditions.go:105] duration metric: took 26.64051ms to run NodePressure ...
	I0717 19:59:06.113228 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:06.757768 1102415 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 19:59:06.770030 1102415 kubeadm.go:787] kubelet initialised
	I0717 19:59:06.770064 1102415 kubeadm.go:788] duration metric: took 12.262867ms waiting for restarted kubelet to initialise ...
	I0717 19:59:06.770077 1102415 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:59:06.782569 1102415 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-rjqsv" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:06.794688 1102415 pod_ready.go:97] node "default-k8s-diff-port-711413" hosting pod "coredns-5d78c9869d-rjqsv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:06.794714 1102415 pod_ready.go:81] duration metric: took 12.110858ms waiting for pod "coredns-5d78c9869d-rjqsv" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:06.794723 1102415 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-711413" hosting pod "coredns-5d78c9869d-rjqsv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:06.794732 1102415 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:06.812213 1102415 pod_ready.go:97] node "default-k8s-diff-port-711413" hosting pod "etcd-default-k8s-diff-port-711413" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:06.812265 1102415 pod_ready.go:81] duration metric: took 17.522572ms waiting for pod "etcd-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:06.812281 1102415 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-711413" hosting pod "etcd-default-k8s-diff-port-711413" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:06.812291 1102415 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:06.838241 1102415 pod_ready.go:97] node "default-k8s-diff-port-711413" hosting pod "kube-apiserver-default-k8s-diff-port-711413" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:06.838291 1102415 pod_ready.go:81] duration metric: took 25.986333ms waiting for pod "kube-apiserver-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:06.838306 1102415 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-711413" hosting pod "kube-apiserver-default-k8s-diff-port-711413" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:06.838318 1102415 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:06.869011 1102415 pod_ready.go:97] node "default-k8s-diff-port-711413" hosting pod "kube-controller-manager-default-k8s-diff-port-711413" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:06.869127 1102415 pod_ready.go:81] duration metric: took 30.791681ms waiting for pod "kube-controller-manager-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:06.869155 1102415 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-711413" hosting pod "kube-controller-manager-default-k8s-diff-port-711413" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:06.869192 1102415 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9qfpg" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:07.164422 1102415 pod_ready.go:97] node "default-k8s-diff-port-711413" hosting pod "kube-proxy-9qfpg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:07.164521 1102415 pod_ready.go:81] duration metric: took 295.308967ms waiting for pod "kube-proxy-9qfpg" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:07.164549 1102415 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-711413" hosting pod "kube-proxy-9qfpg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:07.164570 1102415 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:07.571331 1102415 pod_ready.go:97] node "default-k8s-diff-port-711413" hosting pod "kube-scheduler-default-k8s-diff-port-711413" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:07.571370 1102415 pod_ready.go:81] duration metric: took 406.779012ms waiting for pod "kube-scheduler-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:07.571383 1102415 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-711413" hosting pod "kube-scheduler-default-k8s-diff-port-711413" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:07.571393 1102415 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:07.967699 1102415 pod_ready.go:97] node "default-k8s-diff-port-711413" hosting pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:07.967740 1102415 pod_ready.go:81] duration metric: took 396.334567ms waiting for pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:07.967757 1102415 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-711413" hosting pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:07.967770 1102415 pod_ready.go:38] duration metric: took 1.197678353s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
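A manual equivalent of the readiness polling above, using the profile name from the log and the kubectl --context pattern used elsewhere in this report:
	kubectl --context default-k8s-diff-port-711413 get nodes
	kubectl --context default-k8s-diff-port-711413 -n kube-system get pods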
	I0717 19:59:07.967793 1102415 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 19:59:08.014470 1102415 ops.go:34] apiserver oom_adj: -16
	I0717 19:59:08.014500 1102415 kubeadm.go:640] restartCluster took 22.633851106s
	I0717 19:59:08.014510 1102415 kubeadm.go:406] StartCluster complete in 22.683627985s
	I0717 19:59:08.014534 1102415 settings.go:142] acquiring lock: {Name:mk3323296c186763db6ebb5a1c5ae94a6b1a6242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:59:08.014622 1102415 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 19:59:08.017393 1102415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/kubeconfig: {Name:mk7f4fe48169c87be980c7edd9dbe55d4ea8b9ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:59:08.018018 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 19:59:08.018126 1102415 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 19:59:08.018273 1102415 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-711413"
	I0717 19:59:08.018300 1102415 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-711413"
	W0717 19:59:08.018309 1102415 addons.go:240] addon storage-provisioner should already be in state true
	I0717 19:59:08.018404 1102415 host.go:66] Checking if "default-k8s-diff-port-711413" exists ...
	I0717 19:59:08.018400 1102415 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-711413"
	I0717 19:59:08.018457 1102415 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-711413"
	W0717 19:59:08.018471 1102415 addons.go:240] addon metrics-server should already be in state true
	I0717 19:59:08.018538 1102415 host.go:66] Checking if "default-k8s-diff-port-711413" exists ...
	I0717 19:59:08.018864 1102415 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:08.018916 1102415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:08.018950 1102415 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:08.018997 1102415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:08.019087 1102415 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-711413"
	I0717 19:59:08.019108 1102415 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-711413"
	I0717 19:59:08.019378 1102415 config.go:182] Loaded profile config "default-k8s-diff-port-711413": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:59:08.019724 1102415 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:08.019823 1102415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:08.028311 1102415 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-711413" context rescaled to 1 replicas
	I0717 19:59:08.028363 1102415 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.51 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:59:08.031275 1102415 out.go:177] * Verifying Kubernetes components...
	I0717 19:59:08.033186 1102415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:59:08.041793 1102415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43183
	I0717 19:59:08.041831 1102415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36281
	I0717 19:59:08.042056 1102415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43569
	I0717 19:59:08.042525 1102415 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:08.042709 1102415 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:08.043195 1102415 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:08.043373 1102415 main.go:141] libmachine: Using API Version  1
	I0717 19:59:08.043382 1102415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:08.043479 1102415 main.go:141] libmachine: Using API Version  1
	I0717 19:59:08.043486 1102415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:08.043911 1102415 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:08.044078 1102415 main.go:141] libmachine: Using API Version  1
	I0717 19:59:08.044095 1102415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:08.044514 1102415 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:08.044542 1102415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:08.044773 1102415 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:08.044878 1102415 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:08.045003 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetState
	I0717 19:59:08.045373 1102415 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:08.045401 1102415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:08.065715 1102415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45741
	I0717 19:59:08.066371 1102415 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:08.067102 1102415 main.go:141] libmachine: Using API Version  1
	I0717 19:59:08.067128 1102415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:08.067662 1102415 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:08.067824 1102415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37537
	I0717 19:59:08.068091 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetState
	I0717 19:59:08.069488 1102415 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:08.070144 1102415 main.go:141] libmachine: Using API Version  1
	I0717 19:59:08.070163 1102415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:08.070232 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	I0717 19:59:08.070672 1102415 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:08.070852 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetState
	I0717 19:59:08.072648 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	I0717 19:59:08.075752 1102415 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 19:59:08.077844 1102415 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:59:04.355036 1102136 addons.go:502] enable addons completed in 3.125961318s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0717 19:59:06.268158 1102136 node_ready.go:58] node "no-preload-408472" has status "Ready":"False"
	I0717 19:59:08.079803 1102415 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:59:08.079826 1102415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 19:59:08.079857 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:59:08.077802 1102415 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 19:59:08.079941 1102415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 19:59:08.079958 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:59:08.078604 1102415 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-711413"
	W0717 19:59:08.080010 1102415 addons.go:240] addon default-storageclass should already be in state true
	I0717 19:59:08.080048 1102415 host.go:66] Checking if "default-k8s-diff-port-711413" exists ...
	I0717 19:59:08.080446 1102415 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:08.080498 1102415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:08.084746 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:59:08.084836 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:59:08.085468 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:59:08.085502 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:59:08.085512 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:59:08.085534 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:59:08.085599 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:59:08.085738 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:59:08.085851 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:59:08.085998 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:59:08.086028 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:59:08.086182 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:59:08.086221 1102415 sshutil.go:53] new ssh client: &{IP:192.168.72.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/default-k8s-diff-port-711413/id_rsa Username:docker}
	I0717 19:59:08.086298 1102415 sshutil.go:53] new ssh client: &{IP:192.168.72.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/default-k8s-diff-port-711413/id_rsa Username:docker}
	I0717 19:59:08.103113 1102415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41455
	I0717 19:59:08.103751 1102415 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:08.104389 1102415 main.go:141] libmachine: Using API Version  1
	I0717 19:59:08.104412 1102415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:08.104985 1102415 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:08.105805 1102415 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:08.105846 1102415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:08.127906 1102415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34411
	I0717 19:59:08.129757 1102415 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:08.130713 1102415 main.go:141] libmachine: Using API Version  1
	I0717 19:59:08.130734 1102415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:08.131175 1102415 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:08.133060 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetState
	I0717 19:59:08.135496 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	I0717 19:59:08.135824 1102415 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 19:59:08.135840 1102415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 19:59:08.135860 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:59:08.139031 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:59:08.139443 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:59:08.139480 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:59:08.139855 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:59:08.140455 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:59:08.140850 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:59:08.141145 1102415 sshutil.go:53] new ssh client: &{IP:192.168.72.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/default-k8s-diff-port-711413/id_rsa Username:docker}
	I0717 19:59:08.260742 1102415 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 19:59:08.260779 1102415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 19:59:08.310084 1102415 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 19:59:08.310123 1102415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 19:59:08.315228 1102415 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:59:08.333112 1102415 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 19:59:08.347265 1102415 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:59:08.347297 1102415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 19:59:08.446018 1102415 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:59:08.602418 1102415 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0717 19:59:08.602481 1102415 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-711413" to be "Ready" ...
	I0717 19:59:06.789410 1103141 crio.go:444] Took 2.282067 seconds to copy over tarball
	I0717 19:59:06.789500 1103141 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 19:59:10.614919 1103141 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.825382729s)
	I0717 19:59:10.614956 1103141 crio.go:451] Took 3.825512 seconds to extract the tarball
	I0717 19:59:10.614970 1103141 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 19:59:10.668773 1103141 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:59:10.721815 1103141 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 19:59:10.721849 1103141 cache_images.go:84] Images are preloaded, skipping loading
	I0717 19:59:10.721928 1103141 ssh_runner.go:195] Run: crio config
	I0717 19:59:10.626470 1102415 node_ready.go:58] node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:11.522603 1102415 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.189445026s)
	I0717 19:59:11.522668 1102415 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:11.522681 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .Close
	I0717 19:59:11.522703 1102415 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.207433491s)
	I0717 19:59:11.522747 1102415 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:11.522762 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .Close
	I0717 19:59:11.523183 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | Closing plugin on server side
	I0717 19:59:11.523208 1102415 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:11.523223 1102415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:11.523234 1102415 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:11.523247 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .Close
	I0717 19:59:11.523700 1102415 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:11.523717 1102415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:11.523768 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | Closing plugin on server side
	I0717 19:59:11.525232 1102415 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:11.525259 1102415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:11.525269 1102415 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:11.525278 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .Close
	I0717 19:59:11.526823 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | Closing plugin on server side
	I0717 19:59:11.526841 1102415 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:11.526864 1102415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:11.526878 1102415 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:11.526889 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .Close
	I0717 19:59:11.527158 1102415 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:11.527174 1102415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:11.527190 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | Closing plugin on server side
	I0717 19:59:11.546758 1102415 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.100689574s)
	I0717 19:59:11.546840 1102415 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:11.546856 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .Close
	I0717 19:59:11.548817 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | Closing plugin on server side
	I0717 19:59:11.548900 1102415 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:11.548920 1102415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:11.548946 1102415 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:11.548966 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .Close
	I0717 19:59:11.549341 1102415 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:11.549360 1102415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:11.549374 1102415 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-711413"
	I0717 19:59:11.549385 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | Closing plugin on server side
	I0717 19:59:11.629748 1102415 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
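
The addon manifests above (storage-provisioner, storageclass, metrics-server) are applied by copying the YAML onto the node and running the bundled kubectl against the cluster's own kubeconfig. A minimal Go sketch of that apply step follows; it collapses the SSH hop into a local bash invocation, and everything beyond the paths visible in the log is an assumption for illustration.

    package main

    import (
    	"os"
    	"os/exec"
    )

    func main() {
    	// Mirrors the ssh_runner command from the log, but executed locally.
    	applyCmd := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
    		"/var/lib/minikube/binaries/v1.27.3/kubectl apply -f " +
    		"/etc/kubernetes/addons/storage-provisioner.yaml"

    	cmd := exec.Command("/bin/bash", "-c", applyCmd)
    	cmd.Stdout = os.Stdout
    	cmd.Stderr = os.Stderr
    	if err := cmd.Run(); err != nil {
    		panic(err)
    	}
    }
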
	I0717 19:59:07.962879 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:07.963461 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:07.963494 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:07.963408 1103733 retry.go:31] will retry after 1.458776494s: waiting for machine to come up
	I0717 19:59:09.423815 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:09.424545 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:09.424578 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:09.424434 1103733 retry.go:31] will retry after 1.505416741s: waiting for machine to come up
	I0717 19:59:10.932450 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:10.932970 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:10.932999 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:10.932907 1103733 retry.go:31] will retry after 2.119238731s: waiting for machine to come up
	I0717 19:59:08.762965 1102136 node_ready.go:49] node "no-preload-408472" has status "Ready":"True"
	I0717 19:59:08.762999 1102136 node_ready.go:38] duration metric: took 7.016711148s waiting for node "no-preload-408472" to be "Ready" ...
	I0717 19:59:08.763010 1102136 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:59:08.770929 1102136 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-9mxdj" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:08.781876 1102136 pod_ready.go:92] pod "coredns-5d78c9869d-9mxdj" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:08.781916 1102136 pod_ready.go:81] duration metric: took 10.948677ms waiting for pod "coredns-5d78c9869d-9mxdj" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:08.781931 1102136 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:08.790806 1102136 pod_ready.go:92] pod "etcd-no-preload-408472" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:08.790842 1102136 pod_ready.go:81] duration metric: took 8.902354ms waiting for pod "etcd-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:08.790858 1102136 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:11.107348 1102136 pod_ready.go:102] pod "kube-apiserver-no-preload-408472" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:12.306923 1102136 pod_ready.go:92] pod "kube-apiserver-no-preload-408472" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:12.306956 1102136 pod_ready.go:81] duration metric: took 3.516087536s waiting for pod "kube-apiserver-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:12.306971 1102136 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:12.314504 1102136 pod_ready.go:92] pod "kube-controller-manager-no-preload-408472" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:12.314541 1102136 pod_ready.go:81] duration metric: took 7.560269ms waiting for pod "kube-controller-manager-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:12.314557 1102136 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cntdn" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:12.323200 1102136 pod_ready.go:92] pod "kube-proxy-cntdn" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:12.323232 1102136 pod_ready.go:81] duration metric: took 8.667115ms waiting for pod "kube-proxy-cntdn" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:12.323246 1102136 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:12.367453 1102136 pod_ready.go:92] pod "kube-scheduler-no-preload-408472" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:12.367483 1102136 pod_ready.go:81] duration metric: took 44.229894ms waiting for pod "kube-scheduler-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:12.367494 1102136 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace to be "Ready" ...
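
Each pod_ready check above boils down to fetching the pod and inspecting its Ready condition. A minimal client-go sketch of that single check, assuming a hypothetical kubeconfig path and reusing one pod name from the log (the real code wraps this in retries and a 6m timeout):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Kubeconfig path is an assumption for the sketch.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
    		"etcd-no-preload-408472", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	ready := false
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    			ready = true
    		}
    	}
    	fmt.Printf("pod %s Ready=%v\n", pod.Name, ready)
    }
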
	I0717 19:59:11.776332 1102415 addons.go:502] enable addons completed in 3.758222459s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 19:59:13.118285 1102415 node_ready.go:58] node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:10.806964 1103141 cni.go:84] Creating CNI manager for ""
	I0717 19:59:10.907820 1103141 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:59:10.908604 1103141 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 19:59:10.908671 1103141 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.213 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-114855 NodeName:embed-certs-114855 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:59:10.909456 1103141 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.213
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-114855"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:59:10.909661 1103141 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-114855 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:embed-certs-114855 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 19:59:10.909757 1103141 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 19:59:10.933995 1103141 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:59:10.934116 1103141 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:59:10.949424 1103141 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0717 19:59:10.971981 1103141 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:59:10.995942 1103141 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
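
The kubeadm.yaml dumped above is rendered from the cluster parameters (node name, advertise address, pod subnet, Kubernetes version) and then copied to /var/tmp/minikube/kubeadm.yaml.new. A stripped-down sketch of that rendering step, assuming a heavily simplified template rather than minikube's actual one:

    package main

    import (
    	"os"
    	"text/template"
    )

    type kubeadmParams struct {
    	NodeName          string
    	AdvertiseAddress  string
    	PodSubnet         string
    	KubernetesVersion string
    }

    // Simplified template for illustration; field names are assumptions.
    const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: 8443
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
    `

    func main() {
    	p := kubeadmParams{
    		NodeName:          "embed-certs-114855",
    		AdvertiseAddress:  "192.168.39.213",
    		PodSubnet:         "10.244.0.0/16",
    		KubernetesVersion: "v1.27.3",
    	}
    	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
    	if err := t.Execute(os.Stdout, p); err != nil {
    		panic(err)
    	}
    }
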
	I0717 19:59:11.021147 1103141 ssh_runner.go:195] Run: grep 192.168.39.213	control-plane.minikube.internal$ /etc/hosts
	I0717 19:59:11.027824 1103141 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.213	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:59:11.046452 1103141 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855 for IP: 192.168.39.213
	I0717 19:59:11.046507 1103141 certs.go:190] acquiring lock for shared ca certs: {Name:mkdfbc203d7bd874aff2f1c4e5c3352251804e32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:59:11.046722 1103141 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key
	I0717 19:59:11.046792 1103141 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key
	I0717 19:59:11.046890 1103141 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855/client.key
	I0717 19:59:11.046974 1103141 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855/apiserver.key.af9d86f2
	I0717 19:59:11.047032 1103141 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855/proxy-client.key
	I0717 19:59:11.047198 1103141 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem (1338 bytes)
	W0717 19:59:11.047246 1103141 certs.go:433] ignoring /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954_empty.pem, impossibly tiny 0 bytes
	I0717 19:59:11.047262 1103141 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:59:11.047297 1103141 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem (1082 bytes)
	I0717 19:59:11.047330 1103141 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:59:11.047360 1103141 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem (1675 bytes)
	I0717 19:59:11.047422 1103141 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:59:11.048308 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 19:59:11.076826 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 19:59:11.116981 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:59:11.152433 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 19:59:11.186124 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:59:11.219052 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 19:59:11.251034 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:59:11.281026 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 19:59:11.314219 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:59:11.341636 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem --> /usr/share/ca-certificates/1068954.pem (1338 bytes)
	I0717 19:59:11.372920 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /usr/share/ca-certificates/10689542.pem (1708 bytes)
	I0717 19:59:11.403343 1103141 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:59:11.428094 1103141 ssh_runner.go:195] Run: openssl version
	I0717 19:59:11.435909 1103141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:59:11.455770 1103141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:59:11.463749 1103141 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:59:11.463851 1103141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:59:11.473784 1103141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:59:11.490867 1103141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1068954.pem && ln -fs /usr/share/ca-certificates/1068954.pem /etc/ssl/certs/1068954.pem"
	I0717 19:59:11.507494 1103141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1068954.pem
	I0717 19:59:11.514644 1103141 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:52 /usr/share/ca-certificates/1068954.pem
	I0717 19:59:11.514746 1103141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1068954.pem
	I0717 19:59:11.523975 1103141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1068954.pem /etc/ssl/certs/51391683.0"
	I0717 19:59:11.539528 1103141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10689542.pem && ln -fs /usr/share/ca-certificates/10689542.pem /etc/ssl/certs/10689542.pem"
	I0717 19:59:11.552649 1103141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10689542.pem
	I0717 19:59:11.559671 1103141 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:52 /usr/share/ca-certificates/10689542.pem
	I0717 19:59:11.559757 1103141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10689542.pem
	I0717 19:59:11.569190 1103141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10689542.pem /etc/ssl/certs/3ec20f2e.0"
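
The openssl/ln commands above install each CA into the system trust store under its subject-hash name (/etc/ssl/certs/<hash>.0). A rough Go equivalent, assuming the minikubeCA path from the log and root privileges to write into /etc/ssl/certs:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem"

    	// Ask openssl for the subject hash used to name the trust-store symlink.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out))

    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	// ln -fs equivalent: drop any stale link, then recreate it.
    	_ = os.Remove(link)
    	if err := os.Symlink(cert, link); err != nil {
    		panic(err)
    	}
    	fmt.Println("linked", link, "->", cert)
    }
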
	I0717 19:59:11.584473 1103141 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 19:59:11.590453 1103141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:59:11.599427 1103141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:59:11.607503 1103141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:59:11.619641 1103141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:59:11.627914 1103141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:59:11.636600 1103141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
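
The -checkend 86400 runs above ask openssl whether each certificate survives the next 24 hours; the same check in plain Go looks like the sketch below (the certificate path is an assumption for the example):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	if time.Until(cert.NotAfter) < 24*time.Hour {
    		fmt.Println("certificate expires within 24h; would need regeneration")
    		os.Exit(1)
    	}
    	fmt.Println("certificate valid for more than 24h")
    }
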
	I0717 19:59:11.645829 1103141 kubeadm.go:404] StartCluster: {Name:embed-certs-114855 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-114855 Namespace:def
ault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/m
inikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:59:11.645960 1103141 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:59:11.646049 1103141 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:59:11.704959 1103141 cri.go:89] found id: ""
	I0717 19:59:11.705078 1103141 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:59:11.720588 1103141 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 19:59:11.720621 1103141 kubeadm.go:636] restartCluster start
	I0717 19:59:11.720697 1103141 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:59:11.734693 1103141 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:11.736236 1103141 kubeconfig.go:92] found "embed-certs-114855" server: "https://192.168.39.213:8443"
	I0717 19:59:11.739060 1103141 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:59:11.752975 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:11.753096 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:11.766287 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:12.266751 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:12.266867 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:12.281077 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:12.766565 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:12.766669 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:12.780460 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:13.267185 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:13.267305 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:13.286250 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:13.766474 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:13.766582 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:13.780973 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:14.266474 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:14.266565 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:14.283412 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:14.766783 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:14.766885 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:14.782291 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:15.266607 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:15.266721 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:15.279993 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
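
The repeated "Checking apiserver status" entries are a poll loop: pgrep is retried roughly every half second until the kube-apiserver process appears or the restart logic gives up. A simplified sketch of such a loop (the 4-minute deadline is an assumption; in the log the command also runs under sudo over SSH):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    		if err == nil {
    			fmt.Printf("apiserver pid: %s", out)
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for kube-apiserver process")
    }
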
	I0717 19:59:13.054320 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:13.054787 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:13.054821 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:13.054724 1103733 retry.go:31] will retry after 2.539531721s: waiting for machine to come up
	I0717 19:59:15.597641 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:15.598199 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:15.598235 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:15.598132 1103733 retry.go:31] will retry after 3.376944775s: waiting for machine to come up
	I0717 19:59:14.773506 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:16.778529 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:14.611538 1102415 node_ready.go:49] node "default-k8s-diff-port-711413" has status "Ready":"True"
	I0717 19:59:14.611573 1102415 node_ready.go:38] duration metric: took 6.009046151s waiting for node "default-k8s-diff-port-711413" to be "Ready" ...
	I0717 19:59:14.611583 1102415 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:59:14.620522 1102415 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-rjqsv" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:14.629345 1102415 pod_ready.go:92] pod "coredns-5d78c9869d-rjqsv" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:14.629380 1102415 pod_ready.go:81] duration metric: took 8.831579ms waiting for pod "coredns-5d78c9869d-rjqsv" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:14.629394 1102415 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:14.636756 1102415 pod_ready.go:92] pod "etcd-default-k8s-diff-port-711413" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:14.636781 1102415 pod_ready.go:81] duration metric: took 7.379506ms waiting for pod "etcd-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:14.636791 1102415 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:16.658668 1102415 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-711413" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:16.658699 1102415 pod_ready.go:81] duration metric: took 2.021899463s waiting for pod "kube-apiserver-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:16.658715 1102415 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:16.667666 1102415 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-711413" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:16.667695 1102415 pod_ready.go:81] duration metric: took 8.971091ms waiting for pod "kube-controller-manager-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:16.667709 1102415 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9qfpg" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:16.677402 1102415 pod_ready.go:92] pod "kube-proxy-9qfpg" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:16.677433 1102415 pod_ready.go:81] duration metric: took 9.71529ms waiting for pod "kube-proxy-9qfpg" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:16.677448 1102415 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:17.011304 1102415 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-711413" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:17.011332 1102415 pod_ready.go:81] duration metric: took 333.876392ms waiting for pod "kube-scheduler-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:17.011344 1102415 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:15.766793 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:15.766913 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:15.780587 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:16.266363 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:16.266491 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:16.281228 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:16.766575 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:16.766690 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:16.782127 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:17.266511 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:17.266610 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:17.282119 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:17.766652 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:17.766758 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:17.783972 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:18.266759 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:18.266855 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:18.284378 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:18.766574 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:18.766675 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:18.782934 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:19.266475 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:19.266577 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:19.280895 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:19.767307 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:19.767411 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:19.781007 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:20.266522 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:20.266646 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:20.280722 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:18.976814 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:18.977300 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:18.977326 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:18.977254 1103733 retry.go:31] will retry after 2.728703676s: waiting for machine to come up
	I0717 19:59:21.709422 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:21.709889 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:21.709916 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:21.709841 1103733 retry.go:31] will retry after 5.373130791s: waiting for machine to come up
	I0717 19:59:19.273610 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:21.274431 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:19.419889 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:21.422395 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:23.423974 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:20.767398 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:20.767505 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:20.780641 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:21.266963 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:21.267053 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:21.280185 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:21.753855 1103141 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0717 19:59:21.753890 1103141 kubeadm.go:1128] stopping kube-system containers ...
	I0717 19:59:21.753905 1103141 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:59:21.753969 1103141 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:59:21.792189 1103141 cri.go:89] found id: ""
	I0717 19:59:21.792276 1103141 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:59:21.809670 1103141 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:59:21.820341 1103141 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:59:21.820408 1103141 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:59:21.830164 1103141 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 19:59:21.830194 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:21.961988 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:22.788248 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:23.013910 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:23.110334 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:23.204343 1103141 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:59:23.204448 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:59:23.721708 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:59:24.222046 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:59:24.721482 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:59:25.221523 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:59:25.721720 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
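
The restart path above runs the individual "kubeadm init phase" subcommands with the versioned binaries prepended to PATH, then polls for the apiserver process. A bare-bones sketch of one such phase invocation, with error handling and environment setup simplified as assumptions:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("sudo", "env",
    		"PATH=/var/lib/minikube/binaries/v1.27.3:"+os.Getenv("PATH"),
    		"kubeadm", "init", "phase", "certs", "all",
    		"--config", "/var/tmp/minikube/kubeadm.yaml")
    	cmd.Stdout = os.Stdout
    	cmd.Stderr = os.Stderr
    	if err := cmd.Run(); err != nil {
    		fmt.Fprintln(os.Stderr, "kubeadm phase failed:", err)
    		os.Exit(1)
    	}
    }
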
	I0717 19:59:23.773347 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:26.275805 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:25.424115 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:27.920288 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:27.084831 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.085274 1101908 main.go:141] libmachine: (old-k8s-version-149000) Found IP for machine: 192.168.50.177
	I0717 19:59:27.085299 1101908 main.go:141] libmachine: (old-k8s-version-149000) Reserving static IP address...
	I0717 19:59:27.085332 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has current primary IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.085757 1101908 main.go:141] libmachine: (old-k8s-version-149000) Reserved static IP address: 192.168.50.177
	I0717 19:59:27.085799 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "old-k8s-version-149000", mac: "52:54:00:88:d8:03", ip: "192.168.50.177"} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:27.085821 1101908 main.go:141] libmachine: (old-k8s-version-149000) Waiting for SSH to be available...
	I0717 19:59:27.085855 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | skip adding static IP to network mk-old-k8s-version-149000 - found existing host DHCP lease matching {name: "old-k8s-version-149000", mac: "52:54:00:88:d8:03", ip: "192.168.50.177"}
	I0717 19:59:27.085880 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | Getting to WaitForSSH function...
	I0717 19:59:27.088245 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.088569 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:27.088605 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.088777 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | Using SSH client type: external
	I0717 19:59:27.088809 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | Using SSH private key: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/old-k8s-version-149000/id_rsa (-rw-------)
	I0717 19:59:27.088850 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.177 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/old-k8s-version-149000/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:59:27.088866 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | About to run SSH command:
	I0717 19:59:27.088877 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | exit 0
	I0717 19:59:27.186039 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | SSH cmd err, output: <nil>: 
	I0717 19:59:27.186549 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetConfigRaw
	I0717 19:59:27.187427 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetIP
	I0717 19:59:27.190317 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.190738 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:27.190781 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.191089 1101908 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/config.json ...
	I0717 19:59:27.191343 1101908 machine.go:88] provisioning docker machine ...
	I0717 19:59:27.191369 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	I0717 19:59:27.191637 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetMachineName
	I0717 19:59:27.191875 1101908 buildroot.go:166] provisioning hostname "old-k8s-version-149000"
	I0717 19:59:27.191902 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetMachineName
	I0717 19:59:27.192058 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 19:59:27.194710 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.195141 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:27.195190 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.195472 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 19:59:27.195752 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:27.195938 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:27.196104 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 19:59:27.196308 1101908 main.go:141] libmachine: Using SSH client type: native
	I0717 19:59:27.196731 1101908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.177 22 <nil> <nil>}
	I0717 19:59:27.196746 1101908 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-149000 && echo "old-k8s-version-149000" | sudo tee /etc/hostname
	I0717 19:59:27.338648 1101908 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-149000
	
	I0717 19:59:27.338712 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 19:59:27.341719 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.342138 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:27.342176 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.342392 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 19:59:27.342666 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:27.342879 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:27.343036 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 19:59:27.343216 1101908 main.go:141] libmachine: Using SSH client type: native
	I0717 19:59:27.343733 1101908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.177 22 <nil> <nil>}
	I0717 19:59:27.343763 1101908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-149000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-149000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-149000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:59:27.478006 1101908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:59:27.478054 1101908 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16890-1061725/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-1061725/.minikube}
	I0717 19:59:27.478109 1101908 buildroot.go:174] setting up certificates
	I0717 19:59:27.478130 1101908 provision.go:83] configureAuth start
	I0717 19:59:27.478150 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetMachineName
	I0717 19:59:27.478485 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetIP
	I0717 19:59:27.481425 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.481865 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:27.481900 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.482029 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 19:59:27.484825 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.485290 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:27.485326 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.485505 1101908 provision.go:138] copyHostCerts
	I0717 19:59:27.485604 1101908 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem, removing ...
	I0717 19:59:27.485633 1101908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem
	I0717 19:59:27.485709 1101908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem (1082 bytes)
	I0717 19:59:27.485837 1101908 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem, removing ...
	I0717 19:59:27.485849 1101908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem
	I0717 19:59:27.485879 1101908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem (1123 bytes)
	I0717 19:59:27.485957 1101908 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem, removing ...
	I0717 19:59:27.485970 1101908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem
	I0717 19:59:27.485997 1101908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem (1675 bytes)
	I0717 19:59:27.486131 1101908 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-149000 san=[192.168.50.177 192.168.50.177 localhost 127.0.0.1 minikube old-k8s-version-149000]
	I0717 19:59:27.667436 1101908 provision.go:172] copyRemoteCerts
	I0717 19:59:27.667514 1101908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:59:27.667551 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 19:59:27.670875 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.671304 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:27.671340 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.671600 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 19:59:27.671851 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:27.672053 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 19:59:27.672222 1101908 sshutil.go:53] new ssh client: &{IP:192.168.50.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/old-k8s-version-149000/id_rsa Username:docker}
	I0717 19:59:27.764116 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 19:59:27.795726 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 19:59:27.827532 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:59:27.859734 1101908 provision.go:86] duration metric: configureAuth took 381.584228ms
	I0717 19:59:27.859769 1101908 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:59:27.860014 1101908 config.go:182] Loaded profile config "old-k8s-version-149000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0717 19:59:27.860125 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 19:59:27.863330 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.863915 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:27.863969 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.864318 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 19:59:27.864559 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:27.864735 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:27.864931 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 19:59:27.865114 1101908 main.go:141] libmachine: Using SSH client type: native
	I0717 19:59:27.865768 1101908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.177 22 <nil> <nil>}
	I0717 19:59:27.865791 1101908 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:59:28.221755 1101908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:59:28.221788 1101908 machine.go:91] provisioned docker machine in 1.030429206s
	I0717 19:59:28.221802 1101908 start.go:300] post-start starting for "old-k8s-version-149000" (driver="kvm2")
	I0717 19:59:28.221817 1101908 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:59:28.221868 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	I0717 19:59:28.222236 1101908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:59:28.222265 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 19:59:28.225578 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:28.226092 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:28.226130 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:28.226268 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 19:59:28.226511 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:28.226695 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 19:59:28.226875 1101908 sshutil.go:53] new ssh client: &{IP:192.168.50.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/old-k8s-version-149000/id_rsa Username:docker}
	I0717 19:59:28.321338 1101908 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:59:28.326703 1101908 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 19:59:28.326747 1101908 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/addons for local assets ...
	I0717 19:59:28.326843 1101908 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/files for local assets ...
	I0717 19:59:28.326969 1101908 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> 10689542.pem in /etc/ssl/certs
	I0717 19:59:28.327239 1101908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:59:28.337536 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:59:28.366439 1101908 start.go:303] post-start completed in 144.619105ms
	I0717 19:59:28.366476 1101908 fix.go:56] fixHost completed within 25.763256574s
	I0717 19:59:28.366510 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 19:59:28.369661 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:28.370194 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:28.370249 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:28.370470 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 19:59:28.370758 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:28.370956 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:28.371192 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 19:59:28.371476 1101908 main.go:141] libmachine: Using SSH client type: native
	I0717 19:59:28.371943 1101908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.177 22 <nil> <nil>}
	I0717 19:59:28.371970 1101908 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:59:28.498983 1101908 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689623968.431200547
	
	I0717 19:59:28.499015 1101908 fix.go:206] guest clock: 1689623968.431200547
	I0717 19:59:28.499025 1101908 fix.go:219] Guest: 2023-07-17 19:59:28.431200547 +0000 UTC Remote: 2023-07-17 19:59:28.366482535 +0000 UTC m=+386.593094928 (delta=64.718012ms)
	I0717 19:59:28.499083 1101908 fix.go:190] guest clock delta is within tolerance: 64.718012ms
	I0717 19:59:28.499090 1101908 start.go:83] releasing machines lock for "old-k8s-version-149000", held for 25.895913429s
	I0717 19:59:28.499122 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	I0717 19:59:28.499449 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetIP
	I0717 19:59:28.502760 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:28.503338 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:28.503395 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:28.503746 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	I0717 19:59:28.504549 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	I0717 19:59:28.504804 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	I0717 19:59:28.504907 1101908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:59:28.504995 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 19:59:28.505142 1101908 ssh_runner.go:195] Run: cat /version.json
	I0717 19:59:28.505175 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 19:59:28.508832 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:28.508868 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:28.509347 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:28.509384 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:28.509412 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:28.509431 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:28.509539 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 19:59:28.509827 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 19:59:28.509888 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:28.510074 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 19:59:28.510126 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:28.510292 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 19:59:28.510284 1101908 sshutil.go:53] new ssh client: &{IP:192.168.50.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/old-k8s-version-149000/id_rsa Username:docker}
	I0717 19:59:28.510442 1101908 sshutil.go:53] new ssh client: &{IP:192.168.50.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/old-k8s-version-149000/id_rsa Username:docker}
	W0717 19:59:28.604171 1101908 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 19:59:28.604283 1101908 ssh_runner.go:195] Run: systemctl --version
	I0717 19:59:28.637495 1101908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:59:28.790306 1101908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:59:28.797261 1101908 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:59:28.797343 1101908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:59:28.822016 1101908 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:59:28.822056 1101908 start.go:469] detecting cgroup driver to use...
	I0717 19:59:28.822144 1101908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:59:28.843785 1101908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:59:28.863178 1101908 docker.go:196] disabling cri-docker service (if available) ...
	I0717 19:59:28.863248 1101908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:59:28.880265 1101908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:59:28.897122 1101908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:59:29.019759 1101908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:59:29.166490 1101908 docker.go:212] disabling docker service ...
	I0717 19:59:29.166561 1101908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:59:29.188125 1101908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:59:29.205693 1101908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:59:29.336805 1101908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:59:29.478585 1101908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:59:29.494755 1101908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:59:29.516478 1101908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0717 19:59:29.516633 1101908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:59:29.527902 1101908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:59:29.528000 1101908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:59:29.539443 1101908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:59:29.551490 1101908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:59:29.563407 1101908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:59:29.577575 1101908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:59:29.587749 1101908 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:59:29.587839 1101908 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:59:29.602120 1101908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:59:29.613647 1101908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:59:29.730721 1101908 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:59:29.907780 1101908 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:59:29.907916 1101908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:59:29.913777 1101908 start.go:537] Will wait 60s for crictl version
	I0717 19:59:29.913855 1101908 ssh_runner.go:195] Run: which crictl
	I0717 19:59:29.921083 1101908 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:59:29.955985 1101908 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 19:59:29.956099 1101908 ssh_runner.go:195] Run: crio --version
	I0717 19:59:30.011733 1101908 ssh_runner.go:195] Run: crio --version
	I0717 19:59:30.068591 1101908 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0717 19:59:25.744228 1103141 api_server.go:72] duration metric: took 2.539876638s to wait for apiserver process to appear ...
	I0717 19:59:25.744263 1103141 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:59:25.744295 1103141 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0717 19:59:25.744850 1103141 api_server.go:269] stopped: https://192.168.39.213:8443/healthz: Get "https://192.168.39.213:8443/healthz": dial tcp 192.168.39.213:8443: connect: connection refused
	I0717 19:59:26.245930 1103141 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0717 19:59:29.163298 1103141 api_server.go:279] https://192.168.39.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:59:29.163345 1103141 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:59:29.163362 1103141 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0717 19:59:29.197738 1103141 api_server.go:279] https://192.168.39.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:59:29.197812 1103141 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:59:29.245946 1103141 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0717 19:59:29.261723 1103141 api_server.go:279] https://192.168.39.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:59:29.261777 1103141 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:59:29.745343 1103141 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0717 19:59:29.753999 1103141 api_server.go:279] https://192.168.39.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:59:29.754040 1103141 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:59:30.245170 1103141 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0717 19:59:30.253748 1103141 api_server.go:279] https://192.168.39.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:59:30.253809 1103141 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:59:30.745290 1103141 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0717 19:59:30.760666 1103141 api_server.go:279] https://192.168.39.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:59:30.760706 1103141 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:59:31.244952 1103141 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0717 19:59:31.262412 1103141 api_server.go:279] https://192.168.39.213:8443/healthz returned 200:
	ok
	I0717 19:59:31.284253 1103141 api_server.go:141] control plane version: v1.27.3
	I0717 19:59:31.284290 1103141 api_server.go:131] duration metric: took 5.540019245s to wait for apiserver health ...
	I0717 19:59:31.284303 1103141 cni.go:84] Creating CNI manager for ""
	I0717 19:59:31.284316 1103141 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:59:31.286828 1103141 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:59:30.070665 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetIP
	I0717 19:59:30.074049 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:30.074479 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:30.074503 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:30.074871 1101908 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0717 19:59:30.080177 1101908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:59:30.094479 1101908 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0717 19:59:30.094543 1101908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:59:30.130526 1101908 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0717 19:59:30.130599 1101908 ssh_runner.go:195] Run: which lz4
	I0717 19:59:30.135920 1101908 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 19:59:30.140678 1101908 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:59:30.140723 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0717 19:59:28.772996 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:30.785175 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:33.273857 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:30.427017 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:32.920586 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:31.288869 1103141 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:59:31.323116 1103141 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 19:59:31.368054 1103141 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:59:31.392061 1103141 system_pods.go:59] 8 kube-system pods found
	I0717 19:59:31.392110 1103141 system_pods.go:61] "coredns-5d78c9869d-rgdz8" [d1cc8cd3-70eb-4315-89d9-40d4ef97a5c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 19:59:31.392122 1103141 system_pods.go:61] "etcd-embed-certs-114855" [4c8e5fe0-e26e-4244-b284-5a42b4247614] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:59:31.392136 1103141 system_pods.go:61] "kube-apiserver-embed-certs-114855" [3cc43f5e-6c56-4587-a69a-ce58c12f500d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:59:31.392146 1103141 system_pods.go:61] "kube-controller-manager-embed-certs-114855" [cadca801-1feb-45f9-ac3c-eca697f1919f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:59:31.392157 1103141 system_pods.go:61] "kube-proxy-lkncr" [9ec4e4ac-81a5-4547-ab3e-6a3db21cc19d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 19:59:31.392166 1103141 system_pods.go:61] "kube-scheduler-embed-certs-114855" [0e9a0435-a1d5-42bc-a051-1587cd479ac6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:59:31.392184 1103141 system_pods.go:61] "metrics-server-74d5c6b9c-pshr5" [2d4e6b33-c325-4aa5-8458-b604be762cbe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:59:31.392192 1103141 system_pods.go:61] "storage-provisioner" [4f7b39f3-3fc5-4e41-9f58-aa1d938ce06f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 19:59:31.392199 1103141 system_pods.go:74] duration metric: took 24.119934ms to wait for pod list to return data ...
	I0717 19:59:31.392210 1103141 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:59:31.405136 1103141 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 19:59:31.405178 1103141 node_conditions.go:123] node cpu capacity is 2
	I0717 19:59:31.405192 1103141 node_conditions.go:105] duration metric: took 12.975462ms to run NodePressure ...
	I0717 19:59:31.405221 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:32.158757 1103141 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 19:59:32.167221 1103141 kubeadm.go:787] kubelet initialised
	I0717 19:59:32.167263 1103141 kubeadm.go:788] duration metric: took 8.462047ms waiting for restarted kubelet to initialise ...
	I0717 19:59:32.167277 1103141 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:59:32.178888 1103141 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-rgdz8" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:34.199125 1103141 pod_ready.go:102] pod "coredns-5d78c9869d-rgdz8" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:32.017439 1101908 crio.go:444] Took 1.881555 seconds to copy over tarball
	I0717 19:59:32.017535 1101908 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 19:59:35.573024 1101908 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.55545349s)
	I0717 19:59:35.573070 1101908 crio.go:451] Took 3.555594 seconds to extract the tarball
	I0717 19:59:35.573081 1101908 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 19:59:35.622240 1101908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:59:35.672113 1101908 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0717 19:59:35.672149 1101908 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 19:59:35.672223 1101908 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:59:35.672279 1101908 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 19:59:35.672325 1101908 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0717 19:59:35.672344 1101908 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 19:59:35.672453 1101908 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 19:59:35.672533 1101908 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0717 19:59:35.672545 1101908 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 19:59:35.672645 1101908 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0717 19:59:35.674063 1101908 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 19:59:35.674110 1101908 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0717 19:59:35.674127 1101908 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0717 19:59:35.674114 1101908 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 19:59:35.674068 1101908 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 19:59:35.674075 1101908 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 19:59:35.674208 1101908 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:59:35.674236 1101908 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0717 19:59:35.835219 1101908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0717 19:59:35.840811 1101908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0717 19:59:35.855242 1101908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0717 19:59:35.857212 1101908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 19:59:35.860547 1101908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0717 19:59:35.864234 1101908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0717 19:59:35.864519 1101908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0717 19:59:35.958693 1101908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:59:35.980110 1101908 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0717 19:59:35.980198 1101908 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0717 19:59:35.980258 1101908 ssh_runner.go:195] Run: which crictl
	I0717 19:59:36.057216 1101908 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0717 19:59:36.057278 1101908 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 19:59:36.057301 1101908 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0717 19:59:36.057334 1101908 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0717 19:59:36.057342 1101908 ssh_runner.go:195] Run: which crictl
	I0717 19:59:36.057362 1101908 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0717 19:59:36.057383 1101908 ssh_runner.go:195] Run: which crictl
	I0717 19:59:36.057412 1101908 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 19:59:36.057451 1101908 ssh_runner.go:195] Run: which crictl
	I0717 19:59:36.066796 1101908 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0717 19:59:36.066859 1101908 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 19:59:36.066944 1101908 ssh_runner.go:195] Run: which crictl
	I0717 19:59:36.084336 1101908 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0717 19:59:36.084398 1101908 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 19:59:36.084439 1101908 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0717 19:59:36.084454 1101908 ssh_runner.go:195] Run: which crictl
	I0717 19:59:36.084479 1101908 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0717 19:59:36.084520 1101908 ssh_runner.go:195] Run: which crictl
	I0717 19:59:36.208377 1101908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0717 19:59:36.208641 1101908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0717 19:59:36.208730 1101908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0717 19:59:36.208827 1101908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 19:59:36.208839 1101908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0717 19:59:36.208879 1101908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0717 19:59:36.208922 1101908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0717 19:59:36.375090 1101908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0717 19:59:36.375371 1101908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0717 19:59:36.383660 1101908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0717 19:59:36.383770 1101908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0717 19:59:36.383841 1101908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0717 19:59:36.383872 1101908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0717 19:59:36.383950 1101908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0717 19:59:36.383986 1101908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0717 19:59:36.388877 1101908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0717 19:59:36.388897 1101908 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0717 19:59:36.388941 1101908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0717 19:59:35.275990 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:37.773385 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:34.927926 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:36.940406 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:36.219570 1103141 pod_ready.go:102] pod "coredns-5d78c9869d-rgdz8" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:37.338137 1103141 pod_ready.go:92] pod "coredns-5d78c9869d-rgdz8" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:37.338209 1103141 pod_ready.go:81] duration metric: took 5.159283632s waiting for pod "coredns-5d78c9869d-rgdz8" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:37.338228 1103141 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:39.354623 1103141 pod_ready.go:102] pod "etcd-embed-certs-114855" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:37.751639 1101908 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.362667245s)
	I0717 19:59:37.751681 1101908 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0717 19:59:37.751736 1101908 cache_images.go:92] LoadImages completed in 2.079569378s
	W0717 19:59:37.751899 1101908 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	I0717 19:59:37.752005 1101908 ssh_runner.go:195] Run: crio config
	I0717 19:59:37.844809 1101908 cni.go:84] Creating CNI manager for ""
	I0717 19:59:37.844845 1101908 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:59:37.844870 1101908 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 19:59:37.844896 1101908 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.177 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-149000 NodeName:old-k8s-version-149000 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.177"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.177 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 19:59:37.845116 1101908 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.177
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-149000"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.177
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.177"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-149000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.177:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:59:37.845228 1101908 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-149000 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.177
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-149000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 19:59:37.845312 1101908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0717 19:59:37.859556 1101908 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:59:37.859640 1101908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:59:37.872740 1101908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0717 19:59:37.891132 1101908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:59:37.911902 1101908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0717 19:59:37.933209 1101908 ssh_runner.go:195] Run: grep 192.168.50.177	control-plane.minikube.internal$ /etc/hosts
	I0717 19:59:37.937317 1101908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.177	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:59:37.950660 1101908 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000 for IP: 192.168.50.177
	I0717 19:59:37.950706 1101908 certs.go:190] acquiring lock for shared ca certs: {Name:mkdfbc203d7bd874aff2f1c4e5c3352251804e32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:59:37.950921 1101908 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key
	I0717 19:59:37.950998 1101908 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key
	I0717 19:59:37.951128 1101908 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/client.key
	I0717 19:59:37.951227 1101908 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/apiserver.key.c699d2bc
	I0717 19:59:37.951298 1101908 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/proxy-client.key
	I0717 19:59:37.951487 1101908 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem (1338 bytes)
	W0717 19:59:37.951529 1101908 certs.go:433] ignoring /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954_empty.pem, impossibly tiny 0 bytes
	I0717 19:59:37.951541 1101908 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:59:37.951567 1101908 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem (1082 bytes)
	I0717 19:59:37.951593 1101908 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:59:37.951634 1101908 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem (1675 bytes)
	I0717 19:59:37.951691 1101908 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:59:37.952597 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 19:59:37.980488 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 19:59:38.008389 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:59:38.037605 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 19:59:38.066142 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:59:38.095838 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 19:59:38.123279 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:59:38.158528 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 19:59:38.190540 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:59:38.218519 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem --> /usr/share/ca-certificates/1068954.pem (1338 bytes)
	I0717 19:59:38.245203 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /usr/share/ca-certificates/10689542.pem (1708 bytes)
	I0717 19:59:38.273077 1101908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:59:38.292610 1101908 ssh_runner.go:195] Run: openssl version
	I0717 19:59:38.298983 1101908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1068954.pem && ln -fs /usr/share/ca-certificates/1068954.pem /etc/ssl/certs/1068954.pem"
	I0717 19:59:38.311477 1101908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1068954.pem
	I0717 19:59:38.316847 1101908 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:52 /usr/share/ca-certificates/1068954.pem
	I0717 19:59:38.316914 1101908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1068954.pem
	I0717 19:59:38.323114 1101908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1068954.pem /etc/ssl/certs/51391683.0"
	I0717 19:59:38.334773 1101908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10689542.pem && ln -fs /usr/share/ca-certificates/10689542.pem /etc/ssl/certs/10689542.pem"
	I0717 19:59:38.346327 1101908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10689542.pem
	I0717 19:59:38.351639 1101908 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:52 /usr/share/ca-certificates/10689542.pem
	I0717 19:59:38.351712 1101908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10689542.pem
	I0717 19:59:38.357677 1101908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10689542.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:59:38.369278 1101908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:59:38.380948 1101908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:59:38.386116 1101908 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:59:38.386181 1101908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:59:38.392204 1101908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:59:38.404677 1101908 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 19:59:38.409861 1101908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:59:38.416797 1101908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:59:38.424606 1101908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:59:38.431651 1101908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:59:38.439077 1101908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:59:38.445660 1101908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 19:59:38.452464 1101908 kubeadm.go:404] StartCluster: {Name:old-k8s-version-149000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-149000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.177 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:59:38.452656 1101908 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:59:38.452738 1101908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:59:38.485873 1101908 cri.go:89] found id: ""
	I0717 19:59:38.485972 1101908 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:59:38.496998 1101908 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 19:59:38.497033 1101908 kubeadm.go:636] restartCluster start
	I0717 19:59:38.497096 1101908 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:59:38.508054 1101908 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:38.509416 1101908 kubeconfig.go:92] found "old-k8s-version-149000" server: "https://192.168.50.177:8443"
	I0717 19:59:38.512586 1101908 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:59:38.524412 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:38.524486 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:38.537824 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:39.038221 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:39.038331 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:39.053301 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:39.538741 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:39.538834 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:39.552525 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:40.038056 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:40.038173 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:40.052410 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:40.537953 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:40.538060 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:40.551667 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:41.038241 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:41.038361 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:41.053485 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:41.538300 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:41.538402 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:41.552741 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:39.773598 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:42.273083 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:39.423700 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:41.918498 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:43.918876 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:40.856641 1103141 pod_ready.go:92] pod "etcd-embed-certs-114855" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:40.856671 1103141 pod_ready.go:81] duration metric: took 3.518433579s waiting for pod "etcd-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:40.856684 1103141 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:42.377156 1103141 pod_ready.go:92] pod "kube-apiserver-embed-certs-114855" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:42.377186 1103141 pod_ready.go:81] duration metric: took 1.520494525s waiting for pod "kube-apiserver-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:42.377196 1103141 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:42.387651 1103141 pod_ready.go:92] pod "kube-controller-manager-embed-certs-114855" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:42.387680 1103141 pod_ready.go:81] duration metric: took 10.47667ms waiting for pod "kube-controller-manager-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:42.387692 1103141 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lkncr" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:42.394735 1103141 pod_ready.go:92] pod "kube-proxy-lkncr" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:42.394770 1103141 pod_ready.go:81] duration metric: took 7.070744ms waiting for pod "kube-proxy-lkncr" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:42.394784 1103141 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:42.402496 1103141 pod_ready.go:92] pod "kube-scheduler-embed-certs-114855" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:42.402530 1103141 pod_ready.go:81] duration metric: took 7.737273ms waiting for pod "kube-scheduler-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:42.402544 1103141 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:44.460075 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:42.038941 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:42.039027 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:42.054992 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:42.538144 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:42.538257 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:42.552160 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:43.038484 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:43.038599 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:43.052649 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:43.538407 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:43.538511 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:43.552927 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:44.038266 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:44.038396 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:44.051851 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:44.538425 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:44.538520 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:44.551726 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:45.038244 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:45.038359 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:45.053215 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:45.538908 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:45.539008 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:45.552009 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:46.038089 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:46.038204 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:46.051955 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:46.538209 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:46.538311 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:46.552579 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:44.273154 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:46.772548 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:45.919143 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:47.919930 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:46.964219 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:49.459411 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:47.038345 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:47.038434 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:47.051506 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:47.538770 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:47.538855 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:47.551813 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:48.038766 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:48.038900 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:48.053717 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:48.524471 1101908 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0717 19:59:48.524521 1101908 kubeadm.go:1128] stopping kube-system containers ...
	I0717 19:59:48.524542 1101908 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:59:48.524625 1101908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:59:48.564396 1101908 cri.go:89] found id: ""
	I0717 19:59:48.564475 1101908 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:59:48.582891 1101908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:59:48.594121 1101908 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:59:48.594212 1101908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:59:48.604963 1101908 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 19:59:48.604998 1101908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:48.756875 1101908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:49.645754 1101908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:49.876047 1101908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:49.996960 1101908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:50.109251 1101908 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:59:50.109337 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:59:50.630868 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:59:51.130836 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:59:51.630446 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:59:51.659578 1101908 api_server.go:72] duration metric: took 1.550325604s to wait for apiserver process to appear ...
	I0717 19:59:51.659605 1101908 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:59:51.659625 1101908 api_server.go:253] Checking apiserver healthz at https://192.168.50.177:8443/healthz ...
	I0717 19:59:48.773967 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:50.775054 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:53.274949 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:49.922365 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:52.422385 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:51.459819 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:53.958809 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:56.660515 1101908 api_server.go:269] stopped: https://192.168.50.177:8443/healthz: Get "https://192.168.50.177:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 19:59:55.773902 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:58.274862 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:54.427715 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:56.922668 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:57.161458 1101908 api_server.go:253] Checking apiserver healthz at https://192.168.50.177:8443/healthz ...
	I0717 19:59:57.720749 1101908 api_server.go:279] https://192.168.50.177:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:59:57.720797 1101908 api_server.go:103] status: https://192.168.50.177:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:59:57.720816 1101908 api_server.go:253] Checking apiserver healthz at https://192.168.50.177:8443/healthz ...
	I0717 19:59:57.828454 1101908 api_server.go:279] https://192.168.50.177:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:59:57.828489 1101908 api_server.go:103] status: https://192.168.50.177:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:59:58.160896 1101908 api_server.go:253] Checking apiserver healthz at https://192.168.50.177:8443/healthz ...
	I0717 19:59:58.173037 1101908 api_server.go:279] https://192.168.50.177:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0717 19:59:58.173072 1101908 api_server.go:103] status: https://192.168.50.177:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0717 19:59:58.660738 1101908 api_server.go:253] Checking apiserver healthz at https://192.168.50.177:8443/healthz ...
	I0717 19:59:58.672508 1101908 api_server.go:279] https://192.168.50.177:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0717 19:59:58.672551 1101908 api_server.go:103] status: https://192.168.50.177:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0717 19:59:59.161133 1101908 api_server.go:253] Checking apiserver healthz at https://192.168.50.177:8443/healthz ...
	I0717 19:59:59.169444 1101908 api_server.go:279] https://192.168.50.177:8443/healthz returned 200:
	ok
	I0717 19:59:59.179637 1101908 api_server.go:141] control plane version: v1.16.0
	I0717 19:59:59.179675 1101908 api_server.go:131] duration metric: took 7.520063574s to wait for apiserver health ...
	I0717 19:59:59.179689 1101908 cni.go:84] Creating CNI manager for ""
	I0717 19:59:59.179703 1101908 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:59:59.182357 1101908 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:59:55.959106 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:58.458415 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:00.458582 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:59.184702 1101908 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:59:59.197727 1101908 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 19:59:59.226682 1101908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:59:59.237874 1101908 system_pods.go:59] 7 kube-system pods found
	I0717 19:59:59.237911 1101908 system_pods.go:61] "coredns-5644d7b6d9-g7fjx" [f9f27bce-aaf6-43f8-8a4b-a87230ceed0e] Running
	I0717 19:59:59.237917 1101908 system_pods.go:61] "etcd-old-k8s-version-149000" [2c732d6d-8a38-401d-aebf-e439c7fcf530] Running
	I0717 19:59:59.237922 1101908 system_pods.go:61] "kube-apiserver-old-k8s-version-149000" [b7f2c355-86cd-4d4c-b7da-043094174829] Running
	I0717 19:59:59.237927 1101908 system_pods.go:61] "kube-controller-manager-old-k8s-version-149000" [30f723aa-a978-4fbb-9210-43a29284aa41] Running
	I0717 19:59:59.237931 1101908 system_pods.go:61] "kube-proxy-f68hg" [a39dea78-e9bb-4f1b-8615-a51a42c6d13f] Running
	I0717 19:59:59.237935 1101908 system_pods.go:61] "kube-scheduler-old-k8s-version-149000" [a84bce5d-82af-4282-a36f-0d1031715a1a] Running
	I0717 19:59:59.237938 1101908 system_pods.go:61] "storage-provisioner" [c5e96cda-ddbc-4d29-86c3-d7ac4c717f61] Running
	I0717 19:59:59.237944 1101908 system_pods.go:74] duration metric: took 11.222716ms to wait for pod list to return data ...
	I0717 19:59:59.237952 1101908 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:59:59.241967 1101908 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 19:59:59.242003 1101908 node_conditions.go:123] node cpu capacity is 2
	I0717 19:59:59.242051 1101908 node_conditions.go:105] duration metric: took 4.091279ms to run NodePressure ...
	I0717 19:59:59.242080 1101908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:59.612659 1101908 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 19:59:59.623317 1101908 retry.go:31] will retry after 338.189596ms: kubelet not initialised
	I0717 19:59:59.972718 1101908 retry.go:31] will retry after 522.339878ms: kubelet not initialised
	I0717 20:00:00.503134 1101908 retry.go:31] will retry after 523.863562ms: kubelet not initialised
	I0717 20:00:01.032819 1101908 retry.go:31] will retry after 993.099088ms: kubelet not initialised
	I0717 20:00:00.773342 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:02.775558 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:59.424228 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:01.424791 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:03.920321 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:02.462125 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:04.960081 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:02.031287 1101908 retry.go:31] will retry after 1.744721946s: kubelet not initialised
	I0717 20:00:03.780335 1101908 retry.go:31] will retry after 2.704259733s: kubelet not initialised
	I0717 20:00:06.491260 1101908 retry.go:31] will retry after 2.934973602s: kubelet not initialised
	I0717 20:00:05.273963 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:07.772710 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:06.428014 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:08.920105 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:07.459314 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:09.959084 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:09.433009 1101908 retry.go:31] will retry after 2.28873038s: kubelet not initialised
	I0717 20:00:11.729010 1101908 retry.go:31] will retry after 4.261199393s: kubelet not initialised
	I0717 20:00:09.772754 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:11.773102 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:11.424610 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:13.922384 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:11.959437 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:14.459152 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:15.999734 1101908 retry.go:31] will retry after 8.732603244s: kubelet not initialised
	I0717 20:00:14.278965 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:16.772786 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:16.424980 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:18.919729 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:16.460363 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:18.960012 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:18.773609 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:21.272529 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:23.272642 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:20.922495 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:23.422032 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:21.460808 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:23.959242 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:24.739282 1101908 retry.go:31] will retry after 8.040459769s: kubelet not initialised
	I0717 20:00:25.274297 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:27.773410 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:25.923167 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:28.420939 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:25.959431 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:27.960549 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:30.459601 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:30.274460 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:32.276595 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:30.428741 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:32.919601 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:32.459855 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:34.960084 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:32.784544 1101908 kubeadm.go:787] kubelet initialised
	I0717 20:00:32.784571 1101908 kubeadm.go:788] duration metric: took 33.171875609s waiting for restarted kubelet to initialise ...
	I0717 20:00:32.784579 1101908 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 20:00:32.789500 1101908 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-9x6xd" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:32.795369 1101908 pod_ready.go:92] pod "coredns-5644d7b6d9-9x6xd" in "kube-system" namespace has status "Ready":"True"
	I0717 20:00:32.795396 1101908 pod_ready.go:81] duration metric: took 5.860061ms waiting for pod "coredns-5644d7b6d9-9x6xd" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:32.795406 1101908 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-g7fjx" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:32.800899 1101908 pod_ready.go:92] pod "coredns-5644d7b6d9-g7fjx" in "kube-system" namespace has status "Ready":"True"
	I0717 20:00:32.800922 1101908 pod_ready.go:81] duration metric: took 5.509805ms waiting for pod "coredns-5644d7b6d9-g7fjx" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:32.800931 1101908 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-149000" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:32.806100 1101908 pod_ready.go:92] pod "etcd-old-k8s-version-149000" in "kube-system" namespace has status "Ready":"True"
	I0717 20:00:32.806123 1101908 pod_ready.go:81] duration metric: took 5.185189ms waiting for pod "etcd-old-k8s-version-149000" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:32.806139 1101908 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-149000" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:32.810963 1101908 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-149000" in "kube-system" namespace has status "Ready":"True"
	I0717 20:00:32.810990 1101908 pod_ready.go:81] duration metric: took 4.843622ms waiting for pod "kube-apiserver-old-k8s-version-149000" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:32.811000 1101908 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-149000" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:33.183907 1101908 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-149000" in "kube-system" namespace has status "Ready":"True"
	I0717 20:00:33.183945 1101908 pod_ready.go:81] duration metric: took 372.931164ms waiting for pod "kube-controller-manager-old-k8s-version-149000" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:33.183961 1101908 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f68hg" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:33.585028 1101908 pod_ready.go:92] pod "kube-proxy-f68hg" in "kube-system" namespace has status "Ready":"True"
	I0717 20:00:33.585064 1101908 pod_ready.go:81] duration metric: took 401.095806ms waiting for pod "kube-proxy-f68hg" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:33.585075 1101908 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-149000" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:33.984668 1101908 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-149000" in "kube-system" namespace has status "Ready":"True"
	I0717 20:00:33.984702 1101908 pod_ready.go:81] duration metric: took 399.618516ms waiting for pod "kube-scheduler-old-k8s-version-149000" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:33.984719 1101908 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace to be "Ready" ...
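The repeated pod_ready checks above boil down to reading each pod and testing its PodReady status condition. As a minimal illustration of that kind of check (a hypothetical client-go sketch, not minikube's actual pod_ready.go; the kubeconfig path is a placeholder and the namespace/pod name are taken from this run):

// podready_sketch.go: read a pod and report whether its PodReady condition
// is True. Illustration only; not minikube's pod_ready.go implementation.
package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady returns true when the pod's PodReady condition is ConditionTrue.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// KUBECONFIG is a placeholder; point it at the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(),
		"metrics-server-74d5856cc6-pjjtx", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, isPodReady(pod))
}

minikube wraps a check like this in a retry loop with the 4m0s budget noted above, which is what produces the long stream of "Ready":"False" lines while the metrics-server pods never become Ready.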
	I0717 20:00:36.392779 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:34.774126 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:37.273706 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:34.921839 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:37.434861 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:37.460518 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:39.960345 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:38.393483 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:40.893085 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:39.773390 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:41.773759 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:39.920512 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:41.920773 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:43.921648 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:42.458830 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:44.958864 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:43.393911 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:45.395481 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:44.273504 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:46.772509 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:45.923812 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:48.422996 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:47.459707 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:49.960056 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:47.892578 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:50.393881 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:48.774960 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:51.273048 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:50.919768 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:52.920372 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:52.458962 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:54.460345 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:52.892172 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:54.893802 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:53.775343 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:56.272701 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:55.427664 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:57.919163 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:56.961203 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:59.458439 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:57.393429 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:59.892089 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:58.772852 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:00.773814 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:03.272058 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:59.920118 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:01.920524 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:01.459281 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:03.460348 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:01.892908 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:04.392588 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:06.393093 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:05.272559 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:07.273883 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:04.421056 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:06.931053 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:05.960254 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:08.457727 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:10.459842 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:08.394141 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:10.892223 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:09.772505 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:11.772971 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:09.422626 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:11.423328 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:13.424365 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:12.958612 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:14.965490 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:12.893418 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:15.394472 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:14.272688 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:16.273685 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:15.919394 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:17.923047 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:17.460160 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:19.958439 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:17.894003 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:19.894407 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:18.772990 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:21.272821 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:23.273740 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:20.427751 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:22.920375 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:21.959239 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:23.959721 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:22.392669 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:24.392858 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:26.392896 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:25.773792 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:28.272610 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:25.423969 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:27.920156 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:25.960648 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:28.460460 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:28.393135 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:30.892597 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:30.273479 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:32.772964 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:29.920769 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:31.921078 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:30.959214 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:33.459431 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:32.892662 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:34.893997 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:35.271152 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:37.273194 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:34.423090 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:36.920078 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:35.960397 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:38.458322 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:40.459780 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:37.393337 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:39.394287 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:39.772604 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:42.273098 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:39.421175 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:41.422356 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:43.920740 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:42.959038 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:45.461396 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:41.891807 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:43.892286 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:45.894698 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:44.772741 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:46.774412 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:46.424856 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:48.425180 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:47.959378 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:49.960002 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:48.392683 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:50.393690 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:49.275313 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:51.773822 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:50.919701 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:52.919921 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:52.459957 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:54.958709 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:52.894991 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:55.392555 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:54.273372 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:56.775369 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:54.920834 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:56.921032 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:57.458730 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:59.460912 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:57.393828 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:59.892700 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:59.272482 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:01.774098 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:59.429623 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:01.920129 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:03.920308 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:01.958119 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:03.958450 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:01.894130 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:03.894522 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:05.895253 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:04.273903 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:06.773689 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:06.424487 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:08.427374 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:05.961652 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:08.457716 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:10.458998 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:08.392784 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:10.393957 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:08.774235 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:11.272040 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:13.273524 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:10.920257 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:12.921203 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:12.459321 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:14.460373 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:12.893440 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:15.392849 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:15.774096 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:18.274263 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:15.421911 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:17.922223 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:16.461304 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:18.958236 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:17.393857 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:19.893380 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:20.274441 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:22.773139 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:20.426046 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:22.919646 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:20.959049 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:23.460465 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:22.392918 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:24.892470 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:25.273192 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:27.273498 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:24.919892 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:26.921648 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:25.961037 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:28.458547 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:26.893611 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:29.393411 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:31.393789 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:29.771999 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:31.772639 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:29.419744 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:31.420846 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:33.422484 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:30.958391 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:33.457895 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:35.459845 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:33.893731 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:36.393503 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:34.272758 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:36.275172 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:35.920446 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:37.922565 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:37.460196 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:39.957808 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:38.394837 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:40.900948 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:38.772728 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:40.773003 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:43.273981 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:40.421480 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:42.919369 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:42.458683 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:44.458762 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:43.392899 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:45.893528 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:45.774587 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:48.273073 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:45.422093 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:47.429470 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:46.958556 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:49.457855 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:47.895376 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:50.392344 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:50.771704 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:52.772560 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:49.918779 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:51.919087 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:51.463426 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:53.957695 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:52.894219 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:54.894786 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:55.273619 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:57.775426 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:54.421093 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:56.424484 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:58.921289 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:55.959421 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:57.960287 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:00.460659 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:57.393604 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:59.394180 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:00.272948 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:02.274904 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:01.421007 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:03.422071 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:02.965138 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:05.458181 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:01.891831 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:03.892978 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:05.895017 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:04.772127 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:07.274312 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:05.920564 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:08.420835 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:07.459555 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:09.460645 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:08.392743 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:10.892887 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:09.772353 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:11.772877 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:12.368174 1102136 pod_ready.go:81] duration metric: took 4m0.000660307s waiting for pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace to be "Ready" ...
	E0717 20:03:12.368224 1102136 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 20:03:12.368251 1102136 pod_ready.go:38] duration metric: took 4m3.60522468s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
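The 4m0s budget is enforced with a context deadline: the condition is polled every few seconds and the wait returns context.DeadlineExceeded once the deadline passes, which surfaces as the "WaitExtra: waitPodCondition: context deadline exceeded" error just above. A stand-alone stdlib sketch of that pattern (hypothetical, not the WaitExtra code path itself):

// deadline_sketch.go: poll a condition until it succeeds or the context's
// 4m deadline expires. Illustration of the timeout behaviour seen in the log.
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// waitFor polls check() every interval until it returns true or ctx expires.
func waitFor(ctx context.Context, interval time.Duration, check func() (bool, error)) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		ok, err := check()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // context.DeadlineExceeded once the timeout passes
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	err := waitFor(ctx, 2*time.Second, func() (bool, error) {
		return false, nil // stand-in for "is the metrics-server pod Ready?"
	})
	if errors.Is(err, context.DeadlineExceeded) {
		fmt.Println("waitPodCondition: context deadline exceeded")
	}
}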
	I0717 20:03:12.368299 1102136 api_server.go:52] waiting for apiserver process to appear ...
	I0717 20:03:12.368343 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 20:03:12.368422 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 20:03:12.425640 1102136 cri.go:89] found id: "eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3"
	I0717 20:03:12.425667 1102136 cri.go:89] found id: ""
	I0717 20:03:12.425684 1102136 logs.go:284] 1 containers: [eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3]
	I0717 20:03:12.425749 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:12.430857 1102136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 20:03:12.430926 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 20:03:12.464958 1102136 cri.go:89] found id: "4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc"
	I0717 20:03:12.464987 1102136 cri.go:89] found id: ""
	I0717 20:03:12.464996 1102136 logs.go:284] 1 containers: [4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc]
	I0717 20:03:12.465063 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:12.470768 1102136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 20:03:12.470865 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 20:03:12.509622 1102136 cri.go:89] found id: "63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce"
	I0717 20:03:12.509655 1102136 cri.go:89] found id: ""
	I0717 20:03:12.509665 1102136 logs.go:284] 1 containers: [63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce]
	I0717 20:03:12.509718 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:12.514266 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 20:03:12.514346 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 20:03:12.556681 1102136 cri.go:89] found id: "0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5"
	I0717 20:03:12.556705 1102136 cri.go:89] found id: ""
	I0717 20:03:12.556713 1102136 logs.go:284] 1 containers: [0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5]
	I0717 20:03:12.556779 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:12.561653 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 20:03:12.561749 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 20:03:12.595499 1102136 cri.go:89] found id: "c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a"
	I0717 20:03:12.595527 1102136 cri.go:89] found id: ""
	I0717 20:03:12.595537 1102136 logs.go:284] 1 containers: [c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a]
	I0717 20:03:12.595603 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:12.600644 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 20:03:12.600728 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 20:03:12.635293 1102136 cri.go:89] found id: "2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9"
	I0717 20:03:12.635327 1102136 cri.go:89] found id: ""
	I0717 20:03:12.635341 1102136 logs.go:284] 1 containers: [2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9]
	I0717 20:03:12.635409 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:12.640445 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 20:03:12.640612 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 20:03:12.679701 1102136 cri.go:89] found id: ""
	I0717 20:03:12.679738 1102136 logs.go:284] 0 containers: []
	W0717 20:03:12.679748 1102136 logs.go:286] No container was found matching "kindnet"
	I0717 20:03:12.679755 1102136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 20:03:12.679817 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 20:03:12.711772 1102136 cri.go:89] found id: "434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447"
	I0717 20:03:12.711815 1102136 cri.go:89] found id: "cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379"
	I0717 20:03:12.711822 1102136 cri.go:89] found id: ""
	I0717 20:03:12.711833 1102136 logs.go:284] 2 containers: [434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447 cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379]
	I0717 20:03:12.711904 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:12.716354 1102136 ssh_runner.go:195] Run: which crictl
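The container discovery in this phase is one shell command per component, sudo crictl ps -a --quiet --name=<component>, executed on the node through ssh_runner. A rough local equivalent (assuming crictl is installed and the sketch runs directly on the node; illustration only, not minikube's cri.go):

// crilist_sketch.go: list CRI container IDs by name, mirroring the
// "sudo crictl ps -a --quiet --name=<component>" calls in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the container IDs reported by crictl for one component.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: error: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}

An empty result, as for kindnet above, simply means no container with that name exists on the node, hence the "No container was found matching" warning.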
	I0717 20:03:12.720769 1102136 logs.go:123] Gathering logs for kube-proxy [c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a] ...
	I0717 20:03:12.720806 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a"
	I0717 20:03:12.757719 1102136 logs.go:123] Gathering logs for kube-controller-manager [2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9] ...
	I0717 20:03:12.757766 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9"
	I0717 20:03:12.804972 1102136 logs.go:123] Gathering logs for coredns [63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce] ...
	I0717 20:03:12.805019 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce"
	I0717 20:03:12.841021 1102136 logs.go:123] Gathering logs for kube-scheduler [0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5] ...
	I0717 20:03:12.841055 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5"
	I0717 20:03:12.890140 1102136 logs.go:123] Gathering logs for storage-provisioner [434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447] ...
	I0717 20:03:12.890185 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447"
	I0717 20:03:12.926177 1102136 logs.go:123] Gathering logs for kubelet ...
	I0717 20:03:12.926219 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 20:03:12.985838 1102136 logs.go:123] Gathering logs for dmesg ...
	I0717 20:03:12.985904 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 20:03:13.003223 1102136 logs.go:123] Gathering logs for describe nodes ...
	I0717 20:03:13.003257 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 20:03:13.180312 1102136 logs.go:123] Gathering logs for kube-apiserver [eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3] ...
	I0717 20:03:13.180361 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3"
	I0717 20:03:13.234663 1102136 logs.go:123] Gathering logs for etcd [4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc] ...
	I0717 20:03:13.234711 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc"
	I0717 20:03:13.297008 1102136 logs.go:123] Gathering logs for storage-provisioner [cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379] ...
	I0717 20:03:13.297065 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379"
	I0717 20:03:13.335076 1102136 logs.go:123] Gathering logs for CRI-O ...
	I0717 20:03:13.335110 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 20:03:10.919208 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:12.921588 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:11.958471 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:13.959630 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:12.893125 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:15.392702 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:13.901775 1102136 logs.go:123] Gathering logs for container status ...
	I0717 20:03:13.901828 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
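The "Gathering logs for ..." pass above is a fixed command set: journalctl for kubelet and CRI-O, dmesg, kubectl describe nodes, crictl logs --tail 400 for each container discovered earlier, and a final container-status listing. A condensed sketch of that collection loop, run directly on the node rather than through ssh_runner (commands copied from the log; the container ID shown is the kube-proxy container from this run):

// loggather_sketch.go: condensed version of the log-collection commands seen
// above. Illustration only; paths and the container ID are from this run.
package main

import (
	"fmt"
	"os/exec"
)

// run executes one bash command line and prints its combined output.
func run(cmdline string) {
	out, err := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
	fmt.Printf("== %s (err=%v)\n%s\n", cmdline, err, out)
}

func main() {
	cmds := []string{
		"sudo journalctl -u kubelet -n 400",
		"sudo journalctl -u crio -n 400",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
		// one of these per container ID found earlier; kube-proxy shown here
		"sudo /usr/bin/crictl logs --tail 400 c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for _, c := range cmds {
		run(c)
	}
}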
	I0717 20:03:16.451075 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:03:16.470892 1102136 api_server.go:72] duration metric: took 4m15.23519157s to wait for apiserver process to appear ...
	I0717 20:03:16.470922 1102136 api_server.go:88] waiting for apiserver healthz status ...
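After the pod wait gives up, the flow turns to the apiserver itself: first the pgrep for the kube-apiserver process just above, then the healthz wait announced here, which amounts to a GET against the apiserver's /healthz endpoint. A hedged client-go sketch of such a probe (not minikube's api_server.go; KUBECONFIG is a placeholder):

// healthz_sketch.go: probe the apiserver's /healthz endpoint via client-go.
// Illustration of the "waiting for apiserver healthz status" step only.
package main

import (
	"context"
	"fmt"
	"os"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET /healthz and print the body ("ok" when the apiserver is healthy).
	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}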
	I0717 20:03:16.470963 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 20:03:16.471033 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 20:03:16.515122 1102136 cri.go:89] found id: "eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3"
	I0717 20:03:16.515151 1102136 cri.go:89] found id: ""
	I0717 20:03:16.515161 1102136 logs.go:284] 1 containers: [eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3]
	I0717 20:03:16.515217 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:16.519734 1102136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 20:03:16.519828 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 20:03:16.552440 1102136 cri.go:89] found id: "4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc"
	I0717 20:03:16.552491 1102136 cri.go:89] found id: ""
	I0717 20:03:16.552503 1102136 logs.go:284] 1 containers: [4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc]
	I0717 20:03:16.552569 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:16.557827 1102136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 20:03:16.557935 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 20:03:16.598317 1102136 cri.go:89] found id: "63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce"
	I0717 20:03:16.598344 1102136 cri.go:89] found id: ""
	I0717 20:03:16.598354 1102136 logs.go:284] 1 containers: [63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce]
	I0717 20:03:16.598425 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:16.604234 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 20:03:16.604331 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 20:03:16.638321 1102136 cri.go:89] found id: "0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5"
	I0717 20:03:16.638349 1102136 cri.go:89] found id: ""
	I0717 20:03:16.638360 1102136 logs.go:284] 1 containers: [0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5]
	I0717 20:03:16.638429 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:16.642755 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 20:03:16.642840 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 20:03:16.681726 1102136 cri.go:89] found id: "c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a"
	I0717 20:03:16.681763 1102136 cri.go:89] found id: ""
	I0717 20:03:16.681776 1102136 logs.go:284] 1 containers: [c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a]
	I0717 20:03:16.681848 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:16.686317 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 20:03:16.686394 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 20:03:16.723303 1102136 cri.go:89] found id: "2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9"
	I0717 20:03:16.723328 1102136 cri.go:89] found id: ""
	I0717 20:03:16.723337 1102136 logs.go:284] 1 containers: [2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9]
	I0717 20:03:16.723387 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:16.727491 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 20:03:16.727586 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 20:03:16.756931 1102136 cri.go:89] found id: ""
	I0717 20:03:16.756960 1102136 logs.go:284] 0 containers: []
	W0717 20:03:16.756968 1102136 logs.go:286] No container was found matching "kindnet"
	I0717 20:03:16.756975 1102136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 20:03:16.757036 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 20:03:16.788732 1102136 cri.go:89] found id: "434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447"
	I0717 20:03:16.788819 1102136 cri.go:89] found id: "cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379"
	I0717 20:03:16.788832 1102136 cri.go:89] found id: ""
	I0717 20:03:16.788845 1102136 logs.go:284] 2 containers: [434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447 cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379]
	I0717 20:03:16.788913 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:16.793783 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:16.797868 1102136 logs.go:123] Gathering logs for dmesg ...
	I0717 20:03:16.797892 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 20:03:16.813545 1102136 logs.go:123] Gathering logs for etcd [4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc] ...
	I0717 20:03:16.813603 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc"
	I0717 20:03:16.865094 1102136 logs.go:123] Gathering logs for coredns [63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce] ...
	I0717 20:03:16.865144 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce"
	I0717 20:03:16.904821 1102136 logs.go:123] Gathering logs for kube-scheduler [0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5] ...
	I0717 20:03:16.904869 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5"
	I0717 20:03:16.945822 1102136 logs.go:123] Gathering logs for kube-proxy [c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a] ...
	I0717 20:03:16.945865 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a"
	I0717 20:03:16.986531 1102136 logs.go:123] Gathering logs for storage-provisioner [434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447] ...
	I0717 20:03:16.986580 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447"
	I0717 20:03:17.023216 1102136 logs.go:123] Gathering logs for storage-provisioner [cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379] ...
	I0717 20:03:17.023253 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379"
	I0717 20:03:17.062491 1102136 logs.go:123] Gathering logs for kubelet ...
	I0717 20:03:17.062532 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 20:03:17.137024 1102136 logs.go:123] Gathering logs for describe nodes ...
	I0717 20:03:17.137085 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 20:03:17.292825 1102136 logs.go:123] Gathering logs for kube-apiserver [eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3] ...
	I0717 20:03:17.292881 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3"
	I0717 20:03:17.345470 1102136 logs.go:123] Gathering logs for kube-controller-manager [2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9] ...
	I0717 20:03:17.345519 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9"
	I0717 20:03:17.401262 1102136 logs.go:123] Gathering logs for CRI-O ...
	I0717 20:03:17.401326 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 20:03:18.037384 1102136 logs.go:123] Gathering logs for container status ...
	I0717 20:03:18.037440 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 20:03:15.422242 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:17.011882 1102415 pod_ready.go:81] duration metric: took 4m0.000519116s waiting for pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace to be "Ready" ...
	E0717 20:03:17.011940 1102415 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 20:03:17.011951 1102415 pod_ready.go:38] duration metric: took 4m2.40035739s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 20:03:17.011974 1102415 api_server.go:52] waiting for apiserver process to appear ...
	I0717 20:03:17.012009 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 20:03:17.012082 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 20:03:17.072352 1102415 cri.go:89] found id: "210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a"
	I0717 20:03:17.072381 1102415 cri.go:89] found id: ""
	I0717 20:03:17.072396 1102415 logs.go:284] 1 containers: [210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a]
	I0717 20:03:17.072467 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:17.078353 1102415 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 20:03:17.078432 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 20:03:17.122416 1102415 cri.go:89] found id: "bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2"
	I0717 20:03:17.122455 1102415 cri.go:89] found id: ""
	I0717 20:03:17.122466 1102415 logs.go:284] 1 containers: [bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2]
	I0717 20:03:17.122539 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:17.128311 1102415 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 20:03:17.128394 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 20:03:17.166606 1102415 cri.go:89] found id: "cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524"
	I0717 20:03:17.166637 1102415 cri.go:89] found id: ""
	I0717 20:03:17.166653 1102415 logs.go:284] 1 containers: [cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524]
	I0717 20:03:17.166720 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:17.172605 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 20:03:17.172693 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 20:03:17.221109 1102415 cri.go:89] found id: "9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261"
	I0717 20:03:17.221138 1102415 cri.go:89] found id: ""
	I0717 20:03:17.221149 1102415 logs.go:284] 1 containers: [9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261]
	I0717 20:03:17.221216 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:17.226305 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 20:03:17.226394 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 20:03:17.271876 1102415 cri.go:89] found id: "76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951"
	I0717 20:03:17.271902 1102415 cri.go:89] found id: ""
	I0717 20:03:17.271911 1102415 logs.go:284] 1 containers: [76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951]
	I0717 20:03:17.271979 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:17.281914 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 20:03:17.282016 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 20:03:17.319258 1102415 cri.go:89] found id: "280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6"
	I0717 20:03:17.319288 1102415 cri.go:89] found id: ""
	I0717 20:03:17.319309 1102415 logs.go:284] 1 containers: [280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6]
	I0717 20:03:17.319376 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:17.323955 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 20:03:17.324102 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 20:03:17.357316 1102415 cri.go:89] found id: ""
	I0717 20:03:17.357355 1102415 logs.go:284] 0 containers: []
	W0717 20:03:17.357367 1102415 logs.go:286] No container was found matching "kindnet"
	I0717 20:03:17.357375 1102415 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 20:03:17.357458 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 20:03:17.409455 1102415 cri.go:89] found id: "19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064"
	I0717 20:03:17.409553 1102415 cri.go:89] found id: "4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88"
	I0717 20:03:17.409613 1102415 cri.go:89] found id: ""
	I0717 20:03:17.409626 1102415 logs.go:284] 2 containers: [19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064 4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88]
	I0717 20:03:17.409706 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:17.417046 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:17.428187 1102415 logs.go:123] Gathering logs for kubelet ...
	I0717 20:03:17.428242 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 20:03:17.504409 1102415 logs.go:123] Gathering logs for describe nodes ...
	I0717 20:03:17.504454 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 20:03:17.673502 1102415 logs.go:123] Gathering logs for etcd [bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2] ...
	I0717 20:03:17.673576 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2"
	I0717 20:03:17.728765 1102415 logs.go:123] Gathering logs for kube-controller-manager [280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6] ...
	I0717 20:03:17.728818 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6"
	I0717 20:03:17.791192 1102415 logs.go:123] Gathering logs for container status ...
	I0717 20:03:17.791249 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 20:03:17.844883 1102415 logs.go:123] Gathering logs for storage-provisioner [19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064] ...
	I0717 20:03:17.844944 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064"
	I0717 20:03:17.891456 1102415 logs.go:123] Gathering logs for storage-provisioner [4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88] ...
	I0717 20:03:17.891501 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88"
	I0717 20:03:17.927018 1102415 logs.go:123] Gathering logs for CRI-O ...
	I0717 20:03:17.927057 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 20:03:18.493310 1102415 logs.go:123] Gathering logs for dmesg ...
	I0717 20:03:18.493362 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 20:03:18.510255 1102415 logs.go:123] Gathering logs for kube-apiserver [210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a] ...
	I0717 20:03:18.510302 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a"
	I0717 20:03:18.558006 1102415 logs.go:123] Gathering logs for coredns [cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524] ...
	I0717 20:03:18.558054 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524"
	I0717 20:03:18.595130 1102415 logs.go:123] Gathering logs for kube-scheduler [9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261] ...
	I0717 20:03:18.595166 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261"
	I0717 20:03:18.636909 1102415 logs.go:123] Gathering logs for kube-proxy [76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951] ...
	I0717 20:03:18.636967 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951"
	I0717 20:03:16.460091 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:18.959764 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:17.395341 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:19.891916 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:20.585703 1102136 api_server.go:253] Checking apiserver healthz at https://192.168.61.65:8443/healthz ...
	I0717 20:03:20.591606 1102136 api_server.go:279] https://192.168.61.65:8443/healthz returned 200:
	ok
	I0717 20:03:20.593225 1102136 api_server.go:141] control plane version: v1.27.3
	I0717 20:03:20.593249 1102136 api_server.go:131] duration metric: took 4.122320377s to wait for apiserver health ...
	I0717 20:03:20.593259 1102136 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 20:03:20.593297 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 20:03:20.593391 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 20:03:20.636361 1102136 cri.go:89] found id: "eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3"
	I0717 20:03:20.636401 1102136 cri.go:89] found id: ""
	I0717 20:03:20.636413 1102136 logs.go:284] 1 containers: [eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3]
	I0717 20:03:20.636488 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:20.641480 1102136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 20:03:20.641622 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 20:03:20.674769 1102136 cri.go:89] found id: "4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc"
	I0717 20:03:20.674791 1102136 cri.go:89] found id: ""
	I0717 20:03:20.674799 1102136 logs.go:284] 1 containers: [4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc]
	I0717 20:03:20.674852 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:20.679515 1102136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 20:03:20.679587 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 20:03:20.717867 1102136 cri.go:89] found id: "63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce"
	I0717 20:03:20.717914 1102136 cri.go:89] found id: ""
	I0717 20:03:20.717927 1102136 logs.go:284] 1 containers: [63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce]
	I0717 20:03:20.717997 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:20.723020 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 20:03:20.723106 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 20:03:20.759930 1102136 cri.go:89] found id: "0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5"
	I0717 20:03:20.759957 1102136 cri.go:89] found id: ""
	I0717 20:03:20.759968 1102136 logs.go:284] 1 containers: [0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5]
	I0717 20:03:20.760032 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:20.764308 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 20:03:20.764378 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 20:03:20.804542 1102136 cri.go:89] found id: "c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a"
	I0717 20:03:20.804570 1102136 cri.go:89] found id: ""
	I0717 20:03:20.804580 1102136 logs.go:284] 1 containers: [c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a]
	I0717 20:03:20.804654 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:20.810036 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 20:03:20.810133 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 20:03:20.846655 1102136 cri.go:89] found id: "2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9"
	I0717 20:03:20.846681 1102136 cri.go:89] found id: ""
	I0717 20:03:20.846689 1102136 logs.go:284] 1 containers: [2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9]
	I0717 20:03:20.846745 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:20.853633 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 20:03:20.853741 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 20:03:20.886359 1102136 cri.go:89] found id: ""
	I0717 20:03:20.886393 1102136 logs.go:284] 0 containers: []
	W0717 20:03:20.886405 1102136 logs.go:286] No container was found matching "kindnet"
	I0717 20:03:20.886413 1102136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 20:03:20.886489 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 20:03:20.924476 1102136 cri.go:89] found id: "434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447"
	I0717 20:03:20.924508 1102136 cri.go:89] found id: "cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379"
	I0717 20:03:20.924513 1102136 cri.go:89] found id: ""
	I0717 20:03:20.924524 1102136 logs.go:284] 2 containers: [434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447 cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379]
	I0717 20:03:20.924576 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:20.929775 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:20.935520 1102136 logs.go:123] Gathering logs for CRI-O ...
	I0717 20:03:20.935547 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 20:03:21.543605 1102136 logs.go:123] Gathering logs for describe nodes ...
	I0717 20:03:21.543668 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 20:03:21.694696 1102136 logs.go:123] Gathering logs for kube-scheduler [0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5] ...
	I0717 20:03:21.694763 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5"
	I0717 20:03:21.736092 1102136 logs.go:123] Gathering logs for kube-proxy [c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a] ...
	I0717 20:03:21.736150 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a"
	I0717 20:03:21.771701 1102136 logs.go:123] Gathering logs for kube-controller-manager [2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9] ...
	I0717 20:03:21.771749 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9"
	I0717 20:03:21.822783 1102136 logs.go:123] Gathering logs for storage-provisioner [434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447] ...
	I0717 20:03:21.822835 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447"
	I0717 20:03:21.885797 1102136 logs.go:123] Gathering logs for storage-provisioner [cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379] ...
	I0717 20:03:21.885851 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379"
	I0717 20:03:21.930801 1102136 logs.go:123] Gathering logs for container status ...
	I0717 20:03:21.930842 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 20:03:21.985829 1102136 logs.go:123] Gathering logs for kubelet ...
	I0717 20:03:21.985862 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 20:03:22.056958 1102136 logs.go:123] Gathering logs for dmesg ...
	I0717 20:03:22.057010 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 20:03:22.074352 1102136 logs.go:123] Gathering logs for kube-apiserver [eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3] ...
	I0717 20:03:22.074402 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3"
	I0717 20:03:22.128386 1102136 logs.go:123] Gathering logs for etcd [4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc] ...
	I0717 20:03:22.128437 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc"
	I0717 20:03:22.188390 1102136 logs.go:123] Gathering logs for coredns [63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce] ...
	I0717 20:03:22.188425 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce"
	I0717 20:03:21.172413 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:03:21.194614 1102415 api_server.go:72] duration metric: took 4m13.166163785s to wait for apiserver process to appear ...
	I0717 20:03:21.194645 1102415 api_server.go:88] waiting for apiserver healthz status ...
	I0717 20:03:21.194687 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 20:03:21.194748 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 20:03:21.229142 1102415 cri.go:89] found id: "210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a"
	I0717 20:03:21.229176 1102415 cri.go:89] found id: ""
	I0717 20:03:21.229186 1102415 logs.go:284] 1 containers: [210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a]
	I0717 20:03:21.229255 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:21.234039 1102415 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 20:03:21.234106 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 20:03:21.266482 1102415 cri.go:89] found id: "bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2"
	I0717 20:03:21.266516 1102415 cri.go:89] found id: ""
	I0717 20:03:21.266527 1102415 logs.go:284] 1 containers: [bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2]
	I0717 20:03:21.266596 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:21.271909 1102415 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 20:03:21.271992 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 20:03:21.309830 1102415 cri.go:89] found id: "cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524"
	I0717 20:03:21.309869 1102415 cri.go:89] found id: ""
	I0717 20:03:21.309878 1102415 logs.go:284] 1 containers: [cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524]
	I0717 20:03:21.309943 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:21.314757 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 20:03:21.314838 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 20:03:21.356650 1102415 cri.go:89] found id: "9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261"
	I0717 20:03:21.356681 1102415 cri.go:89] found id: ""
	I0717 20:03:21.356691 1102415 logs.go:284] 1 containers: [9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261]
	I0717 20:03:21.356748 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:21.361582 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 20:03:21.361667 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 20:03:21.394956 1102415 cri.go:89] found id: "76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951"
	I0717 20:03:21.394982 1102415 cri.go:89] found id: ""
	I0717 20:03:21.394994 1102415 logs.go:284] 1 containers: [76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951]
	I0717 20:03:21.395056 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:21.400073 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 20:03:21.400143 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 20:03:21.441971 1102415 cri.go:89] found id: "280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6"
	I0717 20:03:21.442004 1102415 cri.go:89] found id: ""
	I0717 20:03:21.442015 1102415 logs.go:284] 1 containers: [280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6]
	I0717 20:03:21.442083 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:21.447189 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 20:03:21.447253 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 20:03:21.479477 1102415 cri.go:89] found id: ""
	I0717 20:03:21.479512 1102415 logs.go:284] 0 containers: []
	W0717 20:03:21.479524 1102415 logs.go:286] No container was found matching "kindnet"
	I0717 20:03:21.479534 1102415 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 20:03:21.479615 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 20:03:21.515474 1102415 cri.go:89] found id: "19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064"
	I0717 20:03:21.515502 1102415 cri.go:89] found id: "4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88"
	I0717 20:03:21.515510 1102415 cri.go:89] found id: ""
	I0717 20:03:21.515521 1102415 logs.go:284] 2 containers: [19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064 4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88]
	I0717 20:03:21.515583 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:21.520398 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:21.525414 1102415 logs.go:123] Gathering logs for kube-proxy [76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951] ...
	I0717 20:03:21.525450 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951"
	I0717 20:03:21.564455 1102415 logs.go:123] Gathering logs for kubelet ...
	I0717 20:03:21.564492 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 20:03:21.628081 1102415 logs.go:123] Gathering logs for dmesg ...
	I0717 20:03:21.628127 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 20:03:21.646464 1102415 logs.go:123] Gathering logs for describe nodes ...
	I0717 20:03:21.646508 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 20:03:21.803148 1102415 logs.go:123] Gathering logs for kube-apiserver [210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a] ...
	I0717 20:03:21.803205 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a"
	I0717 20:03:21.856704 1102415 logs.go:123] Gathering logs for etcd [bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2] ...
	I0717 20:03:21.856765 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2"
	I0717 20:03:21.907860 1102415 logs.go:123] Gathering logs for coredns [cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524] ...
	I0717 20:03:21.907912 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524"
	I0717 20:03:21.953111 1102415 logs.go:123] Gathering logs for kube-scheduler [9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261] ...
	I0717 20:03:21.953158 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261"
	I0717 20:03:21.999947 1102415 logs.go:123] Gathering logs for kube-controller-manager [280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6] ...
	I0717 20:03:22.000008 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6"
	I0717 20:03:22.061041 1102415 logs.go:123] Gathering logs for storage-provisioner [19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064] ...
	I0717 20:03:22.061078 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064"
	I0717 20:03:22.103398 1102415 logs.go:123] Gathering logs for storage-provisioner [4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88] ...
	I0717 20:03:22.103432 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88"
	I0717 20:03:22.141810 1102415 logs.go:123] Gathering logs for container status ...
	I0717 20:03:22.141864 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 20:03:22.186692 1102415 logs.go:123] Gathering logs for CRI-O ...
	I0717 20:03:22.186726 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 20:03:24.737179 1102136 system_pods.go:59] 8 kube-system pods found
	I0717 20:03:24.737218 1102136 system_pods.go:61] "coredns-5d78c9869d-9mxdj" [fbff09fd-436d-4208-9187-b6312aa1c223] Running
	I0717 20:03:24.737225 1102136 system_pods.go:61] "etcd-no-preload-408472" [7125f19d-c1ed-4b1f-99be-207bbf5d8c70] Running
	I0717 20:03:24.737231 1102136 system_pods.go:61] "kube-apiserver-no-preload-408472" [0e54eaed-daad-434a-ad3b-96f7fb924099] Running
	I0717 20:03:24.737238 1102136 system_pods.go:61] "kube-controller-manager-no-preload-408472" [38ee8079-8142-45a9-8d5a-4abbc5c8bb3b] Running
	I0717 20:03:24.737243 1102136 system_pods.go:61] "kube-proxy-cntdn" [8653567b-abf9-468c-a030-45fc53fa0cc2] Running
	I0717 20:03:24.737248 1102136 system_pods.go:61] "kube-scheduler-no-preload-408472" [e51560a1-c1b0-407c-8635-df512bd033b5] Running
	I0717 20:03:24.737258 1102136 system_pods.go:61] "metrics-server-74d5c6b9c-hnngh" [dfff837e-dbba-4795-935d-9562d2744169] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:03:24.737269 1102136 system_pods.go:61] "storage-provisioner" [1aefd8ef-dec9-4e37-8648-8e5a62622cd3] Running
	I0717 20:03:24.737278 1102136 system_pods.go:74] duration metric: took 4.144012317s to wait for pod list to return data ...
	I0717 20:03:24.737290 1102136 default_sa.go:34] waiting for default service account to be created ...
	I0717 20:03:24.741216 1102136 default_sa.go:45] found service account: "default"
	I0717 20:03:24.741262 1102136 default_sa.go:55] duration metric: took 3.961044ms for default service account to be created ...
	I0717 20:03:24.741275 1102136 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 20:03:24.749060 1102136 system_pods.go:86] 8 kube-system pods found
	I0717 20:03:24.749094 1102136 system_pods.go:89] "coredns-5d78c9869d-9mxdj" [fbff09fd-436d-4208-9187-b6312aa1c223] Running
	I0717 20:03:24.749100 1102136 system_pods.go:89] "etcd-no-preload-408472" [7125f19d-c1ed-4b1f-99be-207bbf5d8c70] Running
	I0717 20:03:24.749104 1102136 system_pods.go:89] "kube-apiserver-no-preload-408472" [0e54eaed-daad-434a-ad3b-96f7fb924099] Running
	I0717 20:03:24.749109 1102136 system_pods.go:89] "kube-controller-manager-no-preload-408472" [38ee8079-8142-45a9-8d5a-4abbc5c8bb3b] Running
	I0717 20:03:24.749113 1102136 system_pods.go:89] "kube-proxy-cntdn" [8653567b-abf9-468c-a030-45fc53fa0cc2] Running
	I0717 20:03:24.749117 1102136 system_pods.go:89] "kube-scheduler-no-preload-408472" [e51560a1-c1b0-407c-8635-df512bd033b5] Running
	I0717 20:03:24.749125 1102136 system_pods.go:89] "metrics-server-74d5c6b9c-hnngh" [dfff837e-dbba-4795-935d-9562d2744169] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:03:24.749139 1102136 system_pods.go:89] "storage-provisioner" [1aefd8ef-dec9-4e37-8648-8e5a62622cd3] Running
	I0717 20:03:24.749147 1102136 system_pods.go:126] duration metric: took 7.865246ms to wait for k8s-apps to be running ...
	I0717 20:03:24.749155 1102136 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 20:03:24.749215 1102136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:03:24.765460 1102136 system_svc.go:56] duration metric: took 16.294048ms WaitForService to wait for kubelet.
	I0717 20:03:24.765503 1102136 kubeadm.go:581] duration metric: took 4m23.529814054s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 20:03:24.765587 1102136 node_conditions.go:102] verifying NodePressure condition ...
	I0717 20:03:24.769332 1102136 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 20:03:24.769368 1102136 node_conditions.go:123] node cpu capacity is 2
	I0717 20:03:24.769381 1102136 node_conditions.go:105] duration metric: took 3.788611ms to run NodePressure ...
	I0717 20:03:24.769392 1102136 start.go:228] waiting for startup goroutines ...
	I0717 20:03:24.769397 1102136 start.go:233] waiting for cluster config update ...
	I0717 20:03:24.769408 1102136 start.go:242] writing updated cluster config ...
	I0717 20:03:24.769830 1102136 ssh_runner.go:195] Run: rm -f paused
	I0717 20:03:24.827845 1102136 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 20:03:24.830624 1102136 out.go:177] * Done! kubectl is now configured to use "no-preload-408472" cluster and "default" namespace by default
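For readers reproducing the apiserver health wait recorded above (api_server.go polling https://192.168.61.65:8443/healthz until it returns 200 "ok") outside the test harness, the following is a minimal Go sketch. It is an illustration only, not minikube's api_server.go; the endpoint URL is copied from the log, and skipping TLS verification is an assumption made purely to keep the example short (minikube itself trusts the cluster CA).

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for brevity: do not load the cluster CA, skip verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.61.65:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for apiserver health")
}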
	I0717 20:03:20.960575 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:23.458710 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:25.465429 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:21.893446 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:24.393335 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:26.393858 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:25.243410 1102415 api_server.go:253] Checking apiserver healthz at https://192.168.72.51:8444/healthz ...
	I0717 20:03:25.250670 1102415 api_server.go:279] https://192.168.72.51:8444/healthz returned 200:
	ok
	I0717 20:03:25.252086 1102415 api_server.go:141] control plane version: v1.27.3
	I0717 20:03:25.252111 1102415 api_server.go:131] duration metric: took 4.0574608s to wait for apiserver health ...
	I0717 20:03:25.252121 1102415 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 20:03:25.252146 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 20:03:25.252197 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 20:03:25.286754 1102415 cri.go:89] found id: "210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a"
	I0717 20:03:25.286785 1102415 cri.go:89] found id: ""
	I0717 20:03:25.286795 1102415 logs.go:284] 1 containers: [210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a]
	I0717 20:03:25.286867 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:25.292653 1102415 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 20:03:25.292733 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 20:03:25.328064 1102415 cri.go:89] found id: "bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2"
	I0717 20:03:25.328092 1102415 cri.go:89] found id: ""
	I0717 20:03:25.328101 1102415 logs.go:284] 1 containers: [bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2]
	I0717 20:03:25.328170 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:25.333727 1102415 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 20:03:25.333798 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 20:03:25.368132 1102415 cri.go:89] found id: "cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524"
	I0717 20:03:25.368159 1102415 cri.go:89] found id: ""
	I0717 20:03:25.368167 1102415 logs.go:284] 1 containers: [cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524]
	I0717 20:03:25.368245 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:25.373091 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 20:03:25.373197 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 20:03:25.414136 1102415 cri.go:89] found id: "9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261"
	I0717 20:03:25.414165 1102415 cri.go:89] found id: ""
	I0717 20:03:25.414175 1102415 logs.go:284] 1 containers: [9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261]
	I0717 20:03:25.414229 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:25.424603 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 20:03:25.424679 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 20:03:25.470289 1102415 cri.go:89] found id: "76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951"
	I0717 20:03:25.470320 1102415 cri.go:89] found id: ""
	I0717 20:03:25.470331 1102415 logs.go:284] 1 containers: [76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951]
	I0717 20:03:25.470401 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:25.476760 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 20:03:25.476851 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 20:03:25.511350 1102415 cri.go:89] found id: "280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6"
	I0717 20:03:25.511379 1102415 cri.go:89] found id: ""
	I0717 20:03:25.511390 1102415 logs.go:284] 1 containers: [280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6]
	I0717 20:03:25.511459 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:25.516259 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 20:03:25.516339 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 20:03:25.553868 1102415 cri.go:89] found id: ""
	I0717 20:03:25.553913 1102415 logs.go:284] 0 containers: []
	W0717 20:03:25.553925 1102415 logs.go:286] No container was found matching "kindnet"
	I0717 20:03:25.553932 1102415 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 20:03:25.554025 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 20:03:25.589810 1102415 cri.go:89] found id: "19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064"
	I0717 20:03:25.589844 1102415 cri.go:89] found id: "4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88"
	I0717 20:03:25.589851 1102415 cri.go:89] found id: ""
	I0717 20:03:25.589862 1102415 logs.go:284] 2 containers: [19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064 4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88]
	I0717 20:03:25.589924 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:25.594968 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:25.598953 1102415 logs.go:123] Gathering logs for kube-scheduler [9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261] ...
	I0717 20:03:25.598977 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261"
	I0717 20:03:25.640632 1102415 logs.go:123] Gathering logs for kube-controller-manager [280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6] ...
	I0717 20:03:25.640678 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6"
	I0717 20:03:25.692768 1102415 logs.go:123] Gathering logs for storage-provisioner [19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064] ...
	I0717 20:03:25.692812 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064"
	I0717 20:03:25.728461 1102415 logs.go:123] Gathering logs for container status ...
	I0717 20:03:25.728500 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 20:03:25.779239 1102415 logs.go:123] Gathering logs for dmesg ...
	I0717 20:03:25.779278 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 20:03:25.794738 1102415 logs.go:123] Gathering logs for describe nodes ...
	I0717 20:03:25.794790 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 20:03:25.966972 1102415 logs.go:123] Gathering logs for etcd [bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2] ...
	I0717 20:03:25.967016 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2"
	I0717 20:03:26.017430 1102415 logs.go:123] Gathering logs for coredns [cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524] ...
	I0717 20:03:26.017467 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524"
	I0717 20:03:26.053983 1102415 logs.go:123] Gathering logs for kube-proxy [76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951] ...
	I0717 20:03:26.054017 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951"
	I0717 20:03:26.092510 1102415 logs.go:123] Gathering logs for storage-provisioner [4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88] ...
	I0717 20:03:26.092544 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88"
	I0717 20:03:26.127038 1102415 logs.go:123] Gathering logs for CRI-O ...
	I0717 20:03:26.127071 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 20:03:26.728858 1102415 logs.go:123] Gathering logs for kubelet ...
	I0717 20:03:26.728911 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 20:03:26.792099 1102415 logs.go:123] Gathering logs for kube-apiserver [210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a] ...
	I0717 20:03:26.792146 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a"
	I0717 20:03:29.360633 1102415 system_pods.go:59] 8 kube-system pods found
	I0717 20:03:29.360678 1102415 system_pods.go:61] "coredns-5d78c9869d-rjqsv" [f27e2de9-9849-40e2-b6dc-1ee27537b1e6] Running
	I0717 20:03:29.360686 1102415 system_pods.go:61] "etcd-default-k8s-diff-port-711413" [15f74e3f-61a1-4464-bbff-d336f6df4b6e] Running
	I0717 20:03:29.360694 1102415 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-711413" [c164ab32-6a1d-4079-9b58-7da96eabc60e] Running
	I0717 20:03:29.360701 1102415 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-711413" [7dc149c8-be03-4fd6-b945-c63e95a28470] Running
	I0717 20:03:29.360708 1102415 system_pods.go:61] "kube-proxy-9qfpg" [ecb84bb9-57a2-4a42-8104-b792d38479ca] Running
	I0717 20:03:29.360714 1102415 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-711413" [f2cb374f-674e-4a08-82e0-0932b732b485] Running
	I0717 20:03:29.360727 1102415 system_pods.go:61] "metrics-server-74d5c6b9c-hzcd7" [17e01399-9910-4f01-abe7-3eae271af1db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:03:29.360745 1102415 system_pods.go:61] "storage-provisioner" [43705714-97f1-4b06-8eeb-04d60c22112a] Running
	I0717 20:03:29.360755 1102415 system_pods.go:74] duration metric: took 4.108627852s to wait for pod list to return data ...
	I0717 20:03:29.360764 1102415 default_sa.go:34] waiting for default service account to be created ...
	I0717 20:03:29.364887 1102415 default_sa.go:45] found service account: "default"
	I0717 20:03:29.364918 1102415 default_sa.go:55] duration metric: took 4.142278ms for default service account to be created ...
	I0717 20:03:29.364927 1102415 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 20:03:29.372734 1102415 system_pods.go:86] 8 kube-system pods found
	I0717 20:03:29.372774 1102415 system_pods.go:89] "coredns-5d78c9869d-rjqsv" [f27e2de9-9849-40e2-b6dc-1ee27537b1e6] Running
	I0717 20:03:29.372783 1102415 system_pods.go:89] "etcd-default-k8s-diff-port-711413" [15f74e3f-61a1-4464-bbff-d336f6df4b6e] Running
	I0717 20:03:29.372791 1102415 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-711413" [c164ab32-6a1d-4079-9b58-7da96eabc60e] Running
	I0717 20:03:29.372799 1102415 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-711413" [7dc149c8-be03-4fd6-b945-c63e95a28470] Running
	I0717 20:03:29.372806 1102415 system_pods.go:89] "kube-proxy-9qfpg" [ecb84bb9-57a2-4a42-8104-b792d38479ca] Running
	I0717 20:03:29.372813 1102415 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-711413" [f2cb374f-674e-4a08-82e0-0932b732b485] Running
	I0717 20:03:29.372824 1102415 system_pods.go:89] "metrics-server-74d5c6b9c-hzcd7" [17e01399-9910-4f01-abe7-3eae271af1db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:03:29.372832 1102415 system_pods.go:89] "storage-provisioner" [43705714-97f1-4b06-8eeb-04d60c22112a] Running
	I0717 20:03:29.372843 1102415 system_pods.go:126] duration metric: took 7.908204ms to wait for k8s-apps to be running ...
	I0717 20:03:29.372857 1102415 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 20:03:29.372916 1102415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:03:29.393783 1102415 system_svc.go:56] duration metric: took 20.914205ms WaitForService to wait for kubelet.
	I0717 20:03:29.393821 1102415 kubeadm.go:581] duration metric: took 4m21.365424408s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 20:03:29.393853 1102415 node_conditions.go:102] verifying NodePressure condition ...
	I0717 20:03:29.398018 1102415 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 20:03:29.398052 1102415 node_conditions.go:123] node cpu capacity is 2
	I0717 20:03:29.398064 1102415 node_conditions.go:105] duration metric: took 4.205596ms to run NodePressure ...
	I0717 20:03:29.398076 1102415 start.go:228] waiting for startup goroutines ...
	I0717 20:03:29.398082 1102415 start.go:233] waiting for cluster config update ...
	I0717 20:03:29.398102 1102415 start.go:242] writing updated cluster config ...
	I0717 20:03:29.398468 1102415 ssh_runner.go:195] Run: rm -f paused
	I0717 20:03:29.454497 1102415 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 20:03:29.457512 1102415 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-711413" cluster and "default" namespace by default
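The log-gathering loop above repeats the same two crictl commands per component: "crictl ps -a --quiet --name=<component>" to discover container IDs, then "crictl logs --tail 400 <id>" to dump each one. A minimal Go sketch of that pattern is shown below; it assumes crictl is on PATH and that the caller has the required privileges, and it is an illustration rather than minikube's cri.go/logs.go.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns"} {
		// Same discovery command the harness runs: list all containers matching the name.
		out, err := exec.Command("crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Println("crictl ps failed:", err)
			continue
		}
		for _, id := range strings.Fields(string(out)) {
			fmt.Printf("=== %s [%s] ===\n", name, id)
			// Same log command the harness runs: tail the last 400 lines of the container.
			logs, _ := exec.Command("crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Print(string(logs))
		}
	}
}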
	I0717 20:03:27.959261 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:30.460004 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:28.394465 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:30.892361 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:32.957801 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:34.958305 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:32.892903 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:35.392748 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:36.958526 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:38.958779 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:37.393705 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:39.892551 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:41.458525 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:42.402712 1103141 pod_ready.go:81] duration metric: took 4m0.00015085s waiting for pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace to be "Ready" ...
	E0717 20:03:42.402748 1103141 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 20:03:42.402774 1103141 pod_ready.go:38] duration metric: took 4m10.235484044s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 20:03:42.402819 1103141 kubeadm.go:640] restartCluster took 4m30.682189828s
	W0717 20:03:42.402887 1103141 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0717 20:03:42.402946 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 20:03:42.393799 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:44.394199 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:46.892897 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:48.895295 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:51.394267 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:53.894027 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:56.393652 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:58.896895 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:01.393396 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:03.892923 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:05.894423 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:08.394591 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:10.893136 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:14.851948 1103141 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.44897498s)
	I0717 20:04:14.852044 1103141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:04:14.868887 1103141 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 20:04:14.879707 1103141 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 20:04:14.890657 1103141 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
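In the check above, a non-zero exit from "ls -la" on the four kubeconfig paths tells the harness that no stale configuration is present, so cleanup is skipped and it proceeds straight to "kubeadm init". An equivalent existence check in Go, for illustration only (the paths are taken verbatim from the log; this is not minikube's kubeadm.go):

package main

import (
	"fmt"
	"os"
)

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	stale := false
	for _, f := range files {
		if _, err := os.Stat(f); err == nil {
			stale = true // at least one leftover kubeconfig exists
		}
	}
	if !stale {
		fmt.Println("no stale kubeconfig found; skipping cleanup and running kubeadm init")
	}
}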
	I0717 20:04:14.890724 1103141 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 20:04:14.961576 1103141 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0717 20:04:14.961661 1103141 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 20:04:15.128684 1103141 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 20:04:15.128835 1103141 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 20:04:15.128966 1103141 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 20:04:15.334042 1103141 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 20:04:15.336736 1103141 out.go:204]   - Generating certificates and keys ...
	I0717 20:04:15.336885 1103141 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 20:04:15.336966 1103141 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 20:04:15.337097 1103141 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 20:04:15.337201 1103141 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0717 20:04:15.337312 1103141 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 20:04:15.337393 1103141 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0717 20:04:15.337769 1103141 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0717 20:04:15.338490 1103141 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0717 20:04:15.338931 1103141 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 20:04:15.339490 1103141 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 20:04:15.339994 1103141 kubeadm.go:322] [certs] Using the existing "sa" key
	I0717 20:04:15.340076 1103141 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 20:04:15.714920 1103141 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 20:04:15.892169 1103141 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 20:04:16.203610 1103141 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 20:04:16.346085 1103141 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 20:04:16.364315 1103141 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 20:04:16.365521 1103141 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 20:04:16.366077 1103141 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 20:04:16.503053 1103141 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 20:04:13.393067 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:15.394199 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:16.505772 1103141 out.go:204]   - Booting up control plane ...
	I0717 20:04:16.505925 1103141 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 20:04:16.506056 1103141 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 20:04:16.511321 1103141 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 20:04:16.513220 1103141 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 20:04:16.516069 1103141 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 20:04:17.892626 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:19.893760 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:25.520496 1103141 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.003077 seconds
	I0717 20:04:25.520676 1103141 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 20:04:25.541790 1103141 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 20:04:26.093172 1103141 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 20:04:26.093446 1103141 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-114855 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 20:04:26.614680 1103141 kubeadm.go:322] [bootstrap-token] Using token: nbkipc.s1xu11jkn2pd9jvz
	I0717 20:04:22.393296 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:24.395001 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:26.617034 1103141 out.go:204]   - Configuring RBAC rules ...
	I0717 20:04:26.617210 1103141 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 20:04:26.625795 1103141 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 20:04:26.645311 1103141 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 20:04:26.650977 1103141 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 20:04:26.656523 1103141 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 20:04:26.662996 1103141 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 20:04:26.691726 1103141 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 20:04:26.969700 1103141 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 20:04:27.038459 1103141 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 20:04:27.039601 1103141 kubeadm.go:322] 
	I0717 20:04:27.039723 1103141 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 20:04:27.039753 1103141 kubeadm.go:322] 
	I0717 20:04:27.039848 1103141 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 20:04:27.039857 1103141 kubeadm.go:322] 
	I0717 20:04:27.039879 1103141 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 20:04:27.039945 1103141 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 20:04:27.040023 1103141 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 20:04:27.040036 1103141 kubeadm.go:322] 
	I0717 20:04:27.040114 1103141 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0717 20:04:27.040123 1103141 kubeadm.go:322] 
	I0717 20:04:27.040192 1103141 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 20:04:27.040202 1103141 kubeadm.go:322] 
	I0717 20:04:27.040302 1103141 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 20:04:27.040419 1103141 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 20:04:27.040533 1103141 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 20:04:27.040543 1103141 kubeadm.go:322] 
	I0717 20:04:27.040653 1103141 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 20:04:27.040780 1103141 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 20:04:27.040792 1103141 kubeadm.go:322] 
	I0717 20:04:27.040917 1103141 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token nbkipc.s1xu11jkn2pd9jvz \
	I0717 20:04:27.041051 1103141 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a35378d49f60cb3201ada96dd7ea3b95e7cce0b7f53bd002f18d31a55869c8fc \
	I0717 20:04:27.041083 1103141 kubeadm.go:322] 	--control-plane 
	I0717 20:04:27.041093 1103141 kubeadm.go:322] 
	I0717 20:04:27.041196 1103141 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 20:04:27.041200 1103141 kubeadm.go:322] 
	I0717 20:04:27.041276 1103141 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token nbkipc.s1xu11jkn2pd9jvz \
	I0717 20:04:27.041420 1103141 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a35378d49f60cb3201ada96dd7ea3b95e7cce0b7f53bd002f18d31a55869c8fc 
	I0717 20:04:27.042440 1103141 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 20:04:27.042466 1103141 cni.go:84] Creating CNI manager for ""
	I0717 20:04:27.042512 1103141 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 20:04:27.046805 1103141 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 20:04:27.049084 1103141 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 20:04:27.115952 1103141 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 20:04:27.155521 1103141 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 20:04:27.155614 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:27.155620 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5 minikube.k8s.io/name=embed-certs-114855 minikube.k8s.io/updated_at=2023_07_17T20_04_27_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:27.604520 1103141 ops.go:34] apiserver oom_adj: -16
	I0717 20:04:27.604687 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:28.204384 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:28.703799 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:29.203981 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:29.703475 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:30.204062 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:30.703323 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:26.892819 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:28.895201 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:31.393384 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:31.204070 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:31.704206 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:32.204069 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:32.704193 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:33.203936 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:33.703692 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:34.203584 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:34.704039 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:35.204118 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:35.703385 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:33.893262 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:33.985163 1101908 pod_ready.go:81] duration metric: took 4m0.000422638s waiting for pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace to be "Ready" ...
	E0717 20:04:33.985205 1101908 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 20:04:33.985241 1101908 pod_ready.go:38] duration metric: took 4m1.200649003s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 20:04:33.985298 1101908 kubeadm.go:640] restartCluster took 4m55.488257482s
	W0717 20:04:33.985385 1101908 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0717 20:04:33.985432 1101908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 20:04:36.203827 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:36.703377 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:37.203981 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:37.703376 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:38.203498 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:38.703751 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:39.204099 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:39.704172 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:39.830734 1103141 kubeadm.go:1081] duration metric: took 12.675193605s to wait for elevateKubeSystemPrivileges.
	I0717 20:04:39.830771 1103141 kubeadm.go:406] StartCluster complete in 5m28.184955104s
	I0717 20:04:39.830796 1103141 settings.go:142] acquiring lock: {Name:mk3323296c186763db6ebb5a1c5ae94a6b1a6242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:04:39.830918 1103141 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 20:04:39.833157 1103141 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/kubeconfig: {Name:mk7f4fe48169c87be980c7edd9dbe55d4ea8b9ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:04:39.834602 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 20:04:39.834801 1103141 config.go:182] Loaded profile config "embed-certs-114855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 20:04:39.834815 1103141 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 20:04:39.835031 1103141 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-114855"
	I0717 20:04:39.835054 1103141 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-114855"
	W0717 20:04:39.835062 1103141 addons.go:240] addon storage-provisioner should already be in state true
	I0717 20:04:39.835120 1103141 host.go:66] Checking if "embed-certs-114855" exists ...
	I0717 20:04:39.835243 1103141 addons.go:69] Setting default-storageclass=true in profile "embed-certs-114855"
	I0717 20:04:39.835240 1103141 addons.go:69] Setting metrics-server=true in profile "embed-certs-114855"
	I0717 20:04:39.835265 1103141 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-114855"
	I0717 20:04:39.835268 1103141 addons.go:231] Setting addon metrics-server=true in "embed-certs-114855"
	W0717 20:04:39.835277 1103141 addons.go:240] addon metrics-server should already be in state true
	I0717 20:04:39.835324 1103141 host.go:66] Checking if "embed-certs-114855" exists ...
	I0717 20:04:39.835732 1103141 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:04:39.835742 1103141 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:04:39.835801 1103141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:04:39.835831 1103141 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:04:39.835799 1103141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:04:39.835916 1103141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:04:39.855470 1103141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35781
	I0717 20:04:39.855482 1103141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35595
	I0717 20:04:39.855481 1103141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35201
	I0717 20:04:39.856035 1103141 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:04:39.856107 1103141 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:04:39.856127 1103141 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:04:39.856776 1103141 main.go:141] libmachine: Using API Version  1
	I0717 20:04:39.856802 1103141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:04:39.856872 1103141 main.go:141] libmachine: Using API Version  1
	I0717 20:04:39.856886 1103141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:04:39.856937 1103141 main.go:141] libmachine: Using API Version  1
	I0717 20:04:39.856967 1103141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:04:39.857216 1103141 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:04:39.857328 1103141 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:04:39.857353 1103141 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:04:39.857979 1103141 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:04:39.858022 1103141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:04:39.858249 1103141 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:04:39.858296 1103141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:04:39.858559 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetState
	I0717 20:04:39.868852 1103141 addons.go:231] Setting addon default-storageclass=true in "embed-certs-114855"
	W0717 20:04:39.868889 1103141 addons.go:240] addon default-storageclass should already be in state true
	I0717 20:04:39.868930 1103141 host.go:66] Checking if "embed-certs-114855" exists ...
	I0717 20:04:39.869376 1103141 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:04:39.869426 1103141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:04:39.877028 1103141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37179
	I0717 20:04:39.877916 1103141 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:04:39.878347 1103141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44951
	I0717 20:04:39.878690 1103141 main.go:141] libmachine: Using API Version  1
	I0717 20:04:39.878713 1103141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:04:39.879085 1103141 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:04:39.879732 1103141 main.go:141] libmachine: Using API Version  1
	I0717 20:04:39.879754 1103141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:04:39.879765 1103141 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:04:39.879950 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetState
	I0717 20:04:39.880175 1103141 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:04:39.880381 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetState
	I0717 20:04:39.882729 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 20:04:39.885818 1103141 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 20:04:39.883284 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 20:04:39.888145 1103141 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 20:04:39.888171 1103141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 20:04:39.888202 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 20:04:39.891651 1103141 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 20:04:39.893769 1103141 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 20:04:39.893066 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 20:04:39.893799 1103141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 20:04:39.893831 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 20:04:39.893840 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 20:04:39.893879 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 20:04:39.894206 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 20:04:39.894454 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 20:04:39.894689 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 20:04:39.894878 1103141 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/embed-certs-114855/id_rsa Username:docker}
	I0717 20:04:39.895562 1103141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39757
	I0717 20:04:39.896172 1103141 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:04:39.896799 1103141 main.go:141] libmachine: Using API Version  1
	I0717 20:04:39.896825 1103141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:04:39.897316 1103141 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:04:39.897969 1103141 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:04:39.898007 1103141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:04:39.898778 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 20:04:39.899616 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 20:04:39.899645 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 20:04:39.899895 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 20:04:39.900193 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 20:04:39.900575 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 20:04:39.900770 1103141 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/embed-certs-114855/id_rsa Username:docker}
	I0717 20:04:39.915966 1103141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43429
	I0717 20:04:39.916539 1103141 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:04:39.917101 1103141 main.go:141] libmachine: Using API Version  1
	I0717 20:04:39.917123 1103141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:04:39.917530 1103141 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:04:39.917816 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetState
	I0717 20:04:39.919631 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 20:04:39.919916 1103141 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 20:04:39.919936 1103141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 20:04:39.919957 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 20:04:39.926132 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 20:04:39.926487 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 20:04:39.926520 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 20:04:39.926779 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 20:04:39.927115 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 20:04:39.927327 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 20:04:39.927522 1103141 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/embed-certs-114855/id_rsa Username:docker}
	I0717 20:04:40.077079 1103141 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 20:04:40.077106 1103141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 20:04:40.084344 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 20:04:40.114809 1103141 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 20:04:40.123795 1103141 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 20:04:40.149950 1103141 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 20:04:40.149977 1103141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 20:04:40.222818 1103141 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 20:04:40.222855 1103141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 20:04:40.290773 1103141 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 20:04:40.464132 1103141 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-114855" context rescaled to 1 replicas
	I0717 20:04:40.464182 1103141 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 20:04:40.468285 1103141 out.go:177] * Verifying Kubernetes components...
	I0717 20:04:40.470824 1103141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:04:42.565704 1103141 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.481305344s)
	I0717 20:04:42.565749 1103141 start.go:917] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0717 20:04:43.290667 1103141 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.175803142s)
	I0717 20:04:43.290744 1103141 main.go:141] libmachine: Making call to close driver server
	I0717 20:04:43.290759 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .Close
	I0717 20:04:43.290778 1103141 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.166947219s)
	I0717 20:04:43.290822 1103141 main.go:141] libmachine: Making call to close driver server
	I0717 20:04:43.290840 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .Close
	I0717 20:04:43.291087 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | Closing plugin on server side
	I0717 20:04:43.291217 1103141 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:04:43.291225 1103141 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:04:43.291238 1103141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:04:43.291241 1103141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:04:43.291254 1103141 main.go:141] libmachine: Making call to close driver server
	I0717 20:04:43.291261 1103141 main.go:141] libmachine: Making call to close driver server
	I0717 20:04:43.291268 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .Close
	I0717 20:04:43.291272 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .Close
	I0717 20:04:43.291613 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | Closing plugin on server side
	I0717 20:04:43.291662 1103141 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:04:43.291671 1103141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:04:43.291732 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | Closing plugin on server side
	I0717 20:04:43.291756 1103141 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:04:43.291764 1103141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:04:43.291775 1103141 main.go:141] libmachine: Making call to close driver server
	I0717 20:04:43.291784 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .Close
	I0717 20:04:43.292436 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | Closing plugin on server side
	I0717 20:04:43.292456 1103141 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:04:43.292471 1103141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:04:43.439222 1103141 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.148389848s)
	I0717 20:04:43.439268 1103141 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.968393184s)
	I0717 20:04:43.439310 1103141 node_ready.go:35] waiting up to 6m0s for node "embed-certs-114855" to be "Ready" ...
	I0717 20:04:43.439357 1103141 main.go:141] libmachine: Making call to close driver server
	I0717 20:04:43.439401 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .Close
	I0717 20:04:43.439784 1103141 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:04:43.439806 1103141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:04:43.439863 1103141 main.go:141] libmachine: Making call to close driver server
	I0717 20:04:43.439932 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .Close
	I0717 20:04:43.440202 1103141 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:04:43.440220 1103141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:04:43.440226 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | Closing plugin on server side
	I0717 20:04:43.440232 1103141 addons.go:467] Verifying addon metrics-server=true in "embed-certs-114855"
	I0717 20:04:43.443066 1103141 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 20:04:43.445240 1103141 addons.go:502] enable addons completed in 3.610419127s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 20:04:43.494952 1103141 node_ready.go:49] node "embed-certs-114855" has status "Ready":"True"
	I0717 20:04:43.495002 1103141 node_ready.go:38] duration metric: took 55.676022ms waiting for node "embed-certs-114855" to be "Ready" ...
	I0717 20:04:43.495017 1103141 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 20:04:43.579632 1103141 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-9dkzb" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:44.676633 1103141 pod_ready.go:92] pod "coredns-5d78c9869d-9dkzb" in "kube-system" namespace has status "Ready":"True"
	I0717 20:04:44.676664 1103141 pod_ready.go:81] duration metric: took 1.096981736s waiting for pod "coredns-5d78c9869d-9dkzb" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:44.676677 1103141 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-gq2b2" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:44.683019 1103141 pod_ready.go:92] pod "coredns-5d78c9869d-gq2b2" in "kube-system" namespace has status "Ready":"True"
	I0717 20:04:44.683061 1103141 pod_ready.go:81] duration metric: took 6.376086ms waiting for pod "coredns-5d78c9869d-gq2b2" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:44.683077 1103141 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:44.691140 1103141 pod_ready.go:92] pod "etcd-embed-certs-114855" in "kube-system" namespace has status "Ready":"True"
	I0717 20:04:44.691166 1103141 pod_ready.go:81] duration metric: took 8.082867ms waiting for pod "etcd-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:44.691180 1103141 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:44.713413 1103141 pod_ready.go:92] pod "kube-apiserver-embed-certs-114855" in "kube-system" namespace has status "Ready":"True"
	I0717 20:04:44.713448 1103141 pod_ready.go:81] duration metric: took 22.261351ms waiting for pod "kube-apiserver-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:44.713462 1103141 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:44.728761 1103141 pod_ready.go:92] pod "kube-controller-manager-embed-certs-114855" in "kube-system" namespace has status "Ready":"True"
	I0717 20:04:44.728797 1103141 pod_ready.go:81] duration metric: took 15.326363ms waiting for pod "kube-controller-manager-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:44.728813 1103141 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bfvnl" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:45.583863 1103141 pod_ready.go:92] pod "kube-proxy-bfvnl" in "kube-system" namespace has status "Ready":"True"
	I0717 20:04:45.583901 1103141 pod_ready.go:81] duration metric: took 855.078548ms waiting for pod "kube-proxy-bfvnl" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:45.583915 1103141 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:45.867684 1103141 pod_ready.go:92] pod "kube-scheduler-embed-certs-114855" in "kube-system" namespace has status "Ready":"True"
	I0717 20:04:45.867719 1103141 pod_ready.go:81] duration metric: took 283.796193ms waiting for pod "kube-scheduler-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:45.867735 1103141 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:48.274479 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:50.278380 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:52.775046 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:54.775545 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:56.776685 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:59.275966 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:57.110722 1101908 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (23.125251743s)
	I0717 20:04:57.110813 1101908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:04:57.124991 1101908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 20:04:57.136828 1101908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 20:04:57.146898 1101908 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 20:04:57.146965 1101908 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0717 20:04:57.390116 1101908 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 20:05:01.281623 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:03.776009 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:10.335351 1101908 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0717 20:05:10.335447 1101908 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 20:05:10.335566 1101908 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 20:05:10.335703 1101908 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 20:05:10.335829 1101908 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 20:05:10.335949 1101908 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 20:05:10.336064 1101908 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 20:05:10.336135 1101908 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0717 20:05:10.336220 1101908 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 20:05:10.338257 1101908 out.go:204]   - Generating certificates and keys ...
	I0717 20:05:10.338354 1101908 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 20:05:10.338443 1101908 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 20:05:10.338558 1101908 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 20:05:10.338681 1101908 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0717 20:05:10.338792 1101908 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 20:05:10.338855 1101908 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0717 20:05:10.338950 1101908 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0717 20:05:10.339044 1101908 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0717 20:05:10.339160 1101908 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 20:05:10.339264 1101908 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 20:05:10.339326 1101908 kubeadm.go:322] [certs] Using the existing "sa" key
	I0717 20:05:10.339403 1101908 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 20:05:10.339477 1101908 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 20:05:10.339556 1101908 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 20:05:10.339650 1101908 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 20:05:10.339727 1101908 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 20:05:10.339820 1101908 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 20:05:10.341550 1101908 out.go:204]   - Booting up control plane ...
	I0717 20:05:10.341674 1101908 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 20:05:10.341797 1101908 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 20:05:10.341892 1101908 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 20:05:10.341982 1101908 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 20:05:10.342180 1101908 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 20:05:10.342290 1101908 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.005656 seconds
	I0717 20:05:10.342399 1101908 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 20:05:10.342515 1101908 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 20:05:10.342582 1101908 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 20:05:10.342742 1101908 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-149000 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0717 20:05:10.342830 1101908 kubeadm.go:322] [bootstrap-token] Using token: ki6f1y.fknzxf03oj84iyat
	I0717 20:05:10.344845 1101908 out.go:204]   - Configuring RBAC rules ...
	I0717 20:05:10.344980 1101908 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 20:05:10.345153 1101908 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 20:05:10.345318 1101908 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 20:05:10.345473 1101908 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 20:05:10.345600 1101908 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 20:05:10.345664 1101908 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 20:05:10.345739 1101908 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 20:05:10.345750 1101908 kubeadm.go:322] 
	I0717 20:05:10.345834 1101908 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 20:05:10.345843 1101908 kubeadm.go:322] 
	I0717 20:05:10.345939 1101908 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 20:05:10.345947 1101908 kubeadm.go:322] 
	I0717 20:05:10.345983 1101908 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 20:05:10.346067 1101908 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 20:05:10.346139 1101908 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 20:05:10.346148 1101908 kubeadm.go:322] 
	I0717 20:05:10.346248 1101908 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 20:05:10.346356 1101908 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 20:05:10.346470 1101908 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 20:05:10.346480 1101908 kubeadm.go:322] 
	I0717 20:05:10.346588 1101908 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0717 20:05:10.346686 1101908 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 20:05:10.346695 1101908 kubeadm.go:322] 
	I0717 20:05:10.346821 1101908 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ki6f1y.fknzxf03oj84iyat \
	I0717 20:05:10.346997 1101908 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:a35378d49f60cb3201ada96dd7ea3b95e7cce0b7f53bd002f18d31a55869c8fc \
	I0717 20:05:10.347033 1101908 kubeadm.go:322]     --control-plane 	  
	I0717 20:05:10.347042 1101908 kubeadm.go:322] 
	I0717 20:05:10.347152 1101908 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 20:05:10.347161 1101908 kubeadm.go:322] 
	I0717 20:05:10.347260 1101908 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ki6f1y.fknzxf03oj84iyat \
	I0717 20:05:10.347429 1101908 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:a35378d49f60cb3201ada96dd7ea3b95e7cce0b7f53bd002f18d31a55869c8fc 
	I0717 20:05:10.347449 1101908 cni.go:84] Creating CNI manager for ""
	I0717 20:05:10.347463 1101908 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 20:05:10.349875 1101908 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 20:05:06.284772 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:08.777303 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:10.351592 1101908 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 20:05:10.370891 1101908 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 20:05:10.395381 1101908 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 20:05:10.395477 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5 minikube.k8s.io/name=old-k8s-version-149000 minikube.k8s.io/updated_at=2023_07_17T20_05_10_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:10.395473 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:10.663627 1101908 ops.go:34] apiserver oom_adj: -16
	I0717 20:05:10.663730 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:11.311991 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:11.812120 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:11.275701 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:13.277070 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:12.312047 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:12.811579 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:13.311876 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:13.811911 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:14.311514 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:14.811938 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:15.312088 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:15.812089 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:16.312164 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:16.812065 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:15.776961 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:17.778204 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:20.275642 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:17.312322 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:17.811428 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:18.312070 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:18.812245 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:19.311363 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:19.811909 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:20.311343 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:20.811869 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:21.311974 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:21.811429 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:22.311474 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:22.811809 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:23.311574 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:23.812246 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:24.312115 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:24.812132 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:25.311694 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:25.457162 1101908 kubeadm.go:1081] duration metric: took 15.061765556s to wait for elevateKubeSystemPrivileges.
	I0717 20:05:25.457213 1101908 kubeadm.go:406] StartCluster complete in 5m47.004786394s
	I0717 20:05:25.457273 1101908 settings.go:142] acquiring lock: {Name:mk3323296c186763db6ebb5a1c5ae94a6b1a6242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:05:25.457431 1101908 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 20:05:25.459593 1101908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/kubeconfig: {Name:mk7f4fe48169c87be980c7edd9dbe55d4ea8b9ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:05:25.459942 1101908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 20:05:25.460139 1101908 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 20:05:25.460267 1101908 config.go:182] Loaded profile config "old-k8s-version-149000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0717 20:05:25.460272 1101908 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-149000"
	I0717 20:05:25.460409 1101908 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-149000"
	W0717 20:05:25.460419 1101908 addons.go:240] addon storage-provisioner should already be in state true
	I0717 20:05:25.460516 1101908 host.go:66] Checking if "old-k8s-version-149000" exists ...
	I0717 20:05:25.460284 1101908 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-149000"
	I0717 20:05:25.460709 1101908 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-149000"
	W0717 20:05:25.460727 1101908 addons.go:240] addon metrics-server should already be in state true
	I0717 20:05:25.460294 1101908 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-149000"
	I0717 20:05:25.460771 1101908 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-149000"
	I0717 20:05:25.460793 1101908 host.go:66] Checking if "old-k8s-version-149000" exists ...
	I0717 20:05:25.461033 1101908 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:05:25.461061 1101908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:05:25.461100 1101908 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:05:25.461128 1101908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:05:25.461201 1101908 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:05:25.461227 1101908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:05:25.487047 1101908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45213
	I0717 20:05:25.487091 1101908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44607
	I0717 20:05:25.487066 1101908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35377
	I0717 20:05:25.487833 1101908 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:05:25.487898 1101908 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:05:25.487930 1101908 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:05:25.488571 1101908 main.go:141] libmachine: Using API Version  1
	I0717 20:05:25.488595 1101908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:05:25.488597 1101908 main.go:141] libmachine: Using API Version  1
	I0717 20:05:25.488615 1101908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:05:25.488632 1101908 main.go:141] libmachine: Using API Version  1
	I0717 20:05:25.488660 1101908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:05:25.489058 1101908 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:05:25.489074 1101908 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:05:25.489135 1101908 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:05:25.489284 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetState
	I0717 20:05:25.489635 1101908 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:05:25.489641 1101908 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:05:25.489654 1101908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:05:25.489657 1101908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:05:25.498029 1101908 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-149000"
	W0717 20:05:25.498058 1101908 addons.go:240] addon default-storageclass should already be in state true
	I0717 20:05:25.498092 1101908 host.go:66] Checking if "old-k8s-version-149000" exists ...
	I0717 20:05:25.498485 1101908 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:05:25.498527 1101908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:05:25.506931 1101908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33279
	I0717 20:05:25.507478 1101908 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:05:25.508080 1101908 main.go:141] libmachine: Using API Version  1
	I0717 20:05:25.508109 1101908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:05:25.508562 1101908 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:05:25.508845 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetState
	I0717 20:05:25.510969 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	I0717 20:05:25.513078 1101908 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 20:05:25.511340 1101908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42217
	I0717 20:05:25.515599 1101908 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 20:05:25.515626 1101908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 20:05:25.515655 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 20:05:25.516012 1101908 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:05:25.516682 1101908 main.go:141] libmachine: Using API Version  1
	I0717 20:05:25.516709 1101908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:05:25.517198 1101908 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:05:25.517438 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetState
	I0717 20:05:25.519920 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 20:05:25.520835 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	I0717 20:05:25.521176 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 20:05:25.521204 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 20:05:25.523226 1101908 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 20:05:22.775399 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:25.278740 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:25.521305 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 20:05:25.523448 1101908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38723
	I0717 20:05:25.525260 1101908 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 20:05:25.525280 1101908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 20:05:25.525310 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 20:05:25.525529 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 20:05:25.526263 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 20:05:25.526597 1101908 sshutil.go:53] new ssh client: &{IP:192.168.50.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/old-k8s-version-149000/id_rsa Username:docker}
	I0717 20:05:25.527369 1101908 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:05:25.528329 1101908 main.go:141] libmachine: Using API Version  1
	I0717 20:05:25.528357 1101908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:05:25.528696 1101908 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:05:25.528792 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 20:05:25.529350 1101908 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:05:25.529381 1101908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:05:25.529649 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 20:05:25.529655 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 20:05:25.529674 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 20:05:25.529823 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 20:05:25.529949 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 20:05:25.530088 1101908 sshutil.go:53] new ssh client: &{IP:192.168.50.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/old-k8s-version-149000/id_rsa Username:docker}
	I0717 20:05:25.552954 1101908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33783
	I0717 20:05:25.553470 1101908 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:05:25.554117 1101908 main.go:141] libmachine: Using API Version  1
	I0717 20:05:25.554145 1101908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:05:25.554521 1101908 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:05:25.554831 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetState
	I0717 20:05:25.556872 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	I0717 20:05:25.557158 1101908 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 20:05:25.557183 1101908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 20:05:25.557204 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 20:05:25.560114 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 20:05:25.560622 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 20:05:25.560656 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 20:05:25.561095 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 20:05:25.561350 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 20:05:25.561512 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 20:05:25.561749 1101908 sshutil.go:53] new ssh client: &{IP:192.168.50.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/old-k8s-version-149000/id_rsa Username:docker}
	I0717 20:05:25.724163 1101908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 20:05:25.749198 1101908 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 20:05:25.749231 1101908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 20:05:25.754533 1101908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 20:05:25.757518 1101908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 20:05:25.811831 1101908 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 20:05:25.811867 1101908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 20:05:25.893143 1101908 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 20:05:25.893175 1101908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 20:05:25.994781 1101908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 20:05:26.019864 1101908 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-149000" context rescaled to 1 replicas
	I0717 20:05:26.019914 1101908 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.177 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 20:05:26.022777 1101908 out.go:177] * Verifying Kubernetes components...
	I0717 20:05:26.025694 1101908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:05:27.100226 1101908 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.376005593s)
	I0717 20:05:27.100282 1101908 main.go:141] libmachine: Making call to close driver server
	I0717 20:05:27.100295 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .Close
	I0717 20:05:27.100306 1101908 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.345727442s)
	I0717 20:05:27.100343 1101908 start.go:917] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0717 20:05:27.100360 1101908 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.342808508s)
	I0717 20:05:27.100411 1101908 main.go:141] libmachine: Making call to close driver server
	I0717 20:05:27.100426 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .Close
	I0717 20:05:27.100781 1101908 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:05:27.100799 1101908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:05:27.100810 1101908 main.go:141] libmachine: Making call to close driver server
	I0717 20:05:27.100821 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .Close
	I0717 20:05:27.100866 1101908 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:05:27.100877 1101908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:05:27.100876 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | Closing plugin on server side
	I0717 20:05:27.100885 1101908 main.go:141] libmachine: Making call to close driver server
	I0717 20:05:27.100894 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .Close
	I0717 20:05:27.101035 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | Closing plugin on server side
	I0717 20:05:27.101065 1101908 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:05:27.101100 1101908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:05:27.101154 1101908 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:05:27.101170 1101908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:05:27.101185 1101908 main.go:141] libmachine: Making call to close driver server
	I0717 20:05:27.101195 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .Close
	I0717 20:05:27.101423 1101908 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:05:27.101441 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | Closing plugin on server side
	I0717 20:05:27.101448 1101908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:05:27.169038 1101908 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.143298277s)
	I0717 20:05:27.169095 1101908 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-149000" to be "Ready" ...
	I0717 20:05:27.169044 1101908 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.174211865s)
	I0717 20:05:27.169278 1101908 main.go:141] libmachine: Making call to close driver server
	I0717 20:05:27.169333 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .Close
	I0717 20:05:27.169672 1101908 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:05:27.169782 1101908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:05:27.169814 1101908 main.go:141] libmachine: Making call to close driver server
	I0717 20:05:27.169837 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .Close
	I0717 20:05:27.169758 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | Closing plugin on server side
	I0717 20:05:27.171950 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | Closing plugin on server side
	I0717 20:05:27.171960 1101908 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:05:27.171979 1101908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:05:27.171992 1101908 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-149000"
	I0717 20:05:27.174411 1101908 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 20:05:27.777543 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:30.276174 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:27.176695 1101908 addons.go:502] enable addons completed in 1.716545434s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 20:05:27.191392 1101908 node_ready.go:49] node "old-k8s-version-149000" has status "Ready":"True"
	I0717 20:05:27.191435 1101908 node_ready.go:38] duration metric: took 22.324367ms waiting for node "old-k8s-version-149000" to be "Ready" ...
	I0717 20:05:27.191450 1101908 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 20:05:27.203011 1101908 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-ldwkf" in "kube-system" namespace to be "Ready" ...
	I0717 20:05:29.214694 1101908 pod_ready.go:102] pod "coredns-5644d7b6d9-ldwkf" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:31.215215 1101908 pod_ready.go:92] pod "coredns-5644d7b6d9-ldwkf" in "kube-system" namespace has status "Ready":"True"
	I0717 20:05:31.215244 1101908 pod_ready.go:81] duration metric: took 4.012199031s waiting for pod "coredns-5644d7b6d9-ldwkf" in "kube-system" namespace to be "Ready" ...
	I0717 20:05:31.215265 1101908 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-t4mmh" in "kube-system" namespace to be "Ready" ...
	I0717 20:05:31.222461 1101908 pod_ready.go:92] pod "kube-proxy-t4mmh" in "kube-system" namespace has status "Ready":"True"
	I0717 20:05:31.222489 1101908 pod_ready.go:81] duration metric: took 7.215944ms waiting for pod "kube-proxy-t4mmh" in "kube-system" namespace to be "Ready" ...
	I0717 20:05:31.222504 1101908 pod_ready.go:38] duration metric: took 4.031041761s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 20:05:31.222530 1101908 api_server.go:52] waiting for apiserver process to appear ...
	I0717 20:05:31.222606 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:05:31.239450 1101908 api_server.go:72] duration metric: took 5.21948786s to wait for apiserver process to appear ...
	I0717 20:05:31.239494 1101908 api_server.go:88] waiting for apiserver healthz status ...
	I0717 20:05:31.239520 1101908 api_server.go:253] Checking apiserver healthz at https://192.168.50.177:8443/healthz ...
	I0717 20:05:31.247985 1101908 api_server.go:279] https://192.168.50.177:8443/healthz returned 200:
	ok
	I0717 20:05:31.249351 1101908 api_server.go:141] control plane version: v1.16.0
	I0717 20:05:31.249383 1101908 api_server.go:131] duration metric: took 9.880729ms to wait for apiserver health ...
	I0717 20:05:31.249391 1101908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 20:05:31.255025 1101908 system_pods.go:59] 4 kube-system pods found
	I0717 20:05:31.255062 1101908 system_pods.go:61] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:31.255069 1101908 system_pods.go:61] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:31.255076 1101908 system_pods.go:61] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:31.255086 1101908 system_pods.go:61] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:31.255095 1101908 system_pods.go:74] duration metric: took 5.697473ms to wait for pod list to return data ...
	I0717 20:05:31.255106 1101908 default_sa.go:34] waiting for default service account to be created ...
	I0717 20:05:31.259740 1101908 default_sa.go:45] found service account: "default"
	I0717 20:05:31.259772 1101908 default_sa.go:55] duration metric: took 4.660789ms for default service account to be created ...
	I0717 20:05:31.259780 1101908 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 20:05:31.264000 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:31.264044 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:31.264051 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:31.264081 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:31.264093 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:31.264116 1101908 retry.go:31] will retry after 269.941707ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:31.540816 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:31.540865 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:31.540876 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:31.540891 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:31.540922 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:31.540951 1101908 retry.go:31] will retry after 335.890023ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:32.287639 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:34.776299 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:31.881678 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:31.881721 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:31.881731 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:31.881742 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:31.881754 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:31.881778 1101908 retry.go:31] will retry after 452.6849ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:32.340889 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:32.340919 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:32.340924 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:32.340931 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:32.340938 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:32.340954 1101908 retry.go:31] will retry after 433.94285ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:32.780743 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:32.780777 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:32.780784 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:32.780795 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:32.780808 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:32.780830 1101908 retry.go:31] will retry after 664.997213ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:33.450870 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:33.450901 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:33.450906 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:33.450912 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:33.450919 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:33.450936 1101908 retry.go:31] will retry after 669.043592ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:34.126116 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:34.126155 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:34.126164 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:34.126177 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:34.126187 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:34.126207 1101908 retry.go:31] will retry after 799.422303ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:34.930555 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:34.930595 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:34.930604 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:34.930614 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:34.930624 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:34.930648 1101908 retry.go:31] will retry after 1.329879988s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:36.266531 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:36.266570 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:36.266578 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:36.266586 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:36.266596 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:36.266616 1101908 retry.go:31] will retry after 1.667039225s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:37.275872 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:39.776283 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:37.940699 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:37.940736 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:37.940746 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:37.940756 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:37.940768 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:37.940793 1101908 retry.go:31] will retry after 1.426011935s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:39.371704 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:39.371738 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:39.371743 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:39.371750 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:39.371757 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:39.371775 1101908 retry.go:31] will retry after 2.864830097s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:42.276143 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:44.775621 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:42.241652 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:42.241693 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:42.241701 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:42.241713 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:42.241723 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:42.241744 1101908 retry.go:31] will retry after 2.785860959s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:45.034761 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:45.034793 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:45.034798 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:45.034806 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:45.034818 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:45.034839 1101908 retry.go:31] will retry after 3.037872313s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:46.776795 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:49.276343 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:48.078790 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:48.078826 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:48.078831 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:48.078842 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:48.078849 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:48.078867 1101908 retry.go:31] will retry after 4.546196458s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:51.777942 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:54.274279 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:52.631941 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:52.631986 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:52.631995 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:52.632006 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:52.632017 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:52.632043 1101908 retry.go:31] will retry after 6.391777088s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:56.276359 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:58.277520 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:59.036918 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:59.036951 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:59.036956 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:59.036963 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:59.036970 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:59.036988 1101908 retry.go:31] will retry after 5.758521304s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:06:00.776149 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:03.276291 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:05.276530 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:04.801914 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:06:04.801944 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:06:04.801950 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:06:04.801958 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:06:04.801965 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:06:04.801982 1101908 retry.go:31] will retry after 7.046104479s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:06:07.777447 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:10.275741 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:12.776577 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:14.776717 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:11.856116 1101908 system_pods.go:86] 8 kube-system pods found
	I0717 20:06:11.856165 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:06:11.856175 1101908 system_pods.go:89] "etcd-old-k8s-version-149000" [702c8e9f-d99a-4766-af97-550dc956f093] Pending
	I0717 20:06:11.856183 1101908 system_pods.go:89] "kube-apiserver-old-k8s-version-149000" [0f0c9817-f4c9-4266-b576-c270cea11b4b] Pending
	I0717 20:06:11.856191 1101908 system_pods.go:89] "kube-controller-manager-old-k8s-version-149000" [539db0c4-6e8c-42eb-9b73-686de5f6c7bf] Running
	I0717 20:06:11.856207 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:06:11.856216 1101908 system_pods.go:89] "kube-scheduler-old-k8s-version-149000" [5a27a0f7-c6c9-4324-a51c-d33c205d8724] Running
	I0717 20:06:11.856295 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:06:11.856308 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:06:11.856336 1101908 retry.go:31] will retry after 13.224383762s: missing components: etcd, kube-apiserver
	I0717 20:06:16.779816 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:19.275840 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:25.091227 1101908 system_pods.go:86] 8 kube-system pods found
	I0717 20:06:25.091272 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:06:25.091281 1101908 system_pods.go:89] "etcd-old-k8s-version-149000" [702c8e9f-d99a-4766-af97-550dc956f093] Running
	I0717 20:06:25.091288 1101908 system_pods.go:89] "kube-apiserver-old-k8s-version-149000" [0f0c9817-f4c9-4266-b576-c270cea11b4b] Running
	I0717 20:06:25.091298 1101908 system_pods.go:89] "kube-controller-manager-old-k8s-version-149000" [539db0c4-6e8c-42eb-9b73-686de5f6c7bf] Running
	I0717 20:06:25.091305 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:06:25.091312 1101908 system_pods.go:89] "kube-scheduler-old-k8s-version-149000" [5a27a0f7-c6c9-4324-a51c-d33c205d8724] Running
	I0717 20:06:25.091324 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:06:25.091337 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:06:25.091348 1101908 system_pods.go:126] duration metric: took 53.831561334s to wait for k8s-apps to be running ...
	I0717 20:06:25.091360 1101908 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 20:06:25.091455 1101908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:06:25.119739 1101908 system_svc.go:56] duration metric: took 28.348212ms WaitForService to wait for kubelet.
	I0717 20:06:25.119804 1101908 kubeadm.go:581] duration metric: took 59.099852409s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 20:06:25.119854 1101908 node_conditions.go:102] verifying NodePressure condition ...
	I0717 20:06:25.123561 1101908 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 20:06:25.123592 1101908 node_conditions.go:123] node cpu capacity is 2
	I0717 20:06:25.123606 1101908 node_conditions.go:105] duration metric: took 3.739793ms to run NodePressure ...
	I0717 20:06:25.123618 1101908 start.go:228] waiting for startup goroutines ...
	I0717 20:06:25.123624 1101908 start.go:233] waiting for cluster config update ...
	I0717 20:06:25.123669 1101908 start.go:242] writing updated cluster config ...
	I0717 20:06:25.124104 1101908 ssh_runner.go:195] Run: rm -f paused
	I0717 20:06:25.182838 1101908 start.go:578] kubectl: 1.27.3, cluster: 1.16.0 (minor skew: 11)
	I0717 20:06:25.185766 1101908 out.go:177] 
	W0717 20:06:25.188227 1101908 out.go:239] ! /usr/local/bin/kubectl is version 1.27.3, which may have incompatibilities with Kubernetes 1.16.0.
	I0717 20:06:25.190452 1101908 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0717 20:06:25.192660 1101908 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-149000" cluster and "default" namespace by default
	I0717 20:06:21.776152 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:23.776276 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:25.781589 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:28.278450 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:30.775293 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:33.276069 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:35.775650 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:37.777006 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:40.275701 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:42.774969 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:44.775928 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:46.776363 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:48.786345 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:51.276618 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:53.776161 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:56.276037 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:58.276310 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:00.276357 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:02.775722 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:04.775945 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:07.280130 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:09.776589 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:12.277066 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:14.775525 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:17.275601 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:19.777143 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:22.286857 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:24.775908 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:26.779341 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:29.275732 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:31.276783 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:33.776286 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:36.274383 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:38.275384 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:40.775469 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:42.776331 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:44.776843 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:47.276067 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:49.276907 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:51.277652 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:53.776315 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:55.780034 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:58.276277 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:00.776903 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:03.276429 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:05.277182 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:07.776330 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:09.777528 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:12.275388 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:14.275926 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:16.776757 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:19.276466 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:21.276544 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:23.775888 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:25.778534 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:28.277897 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:30.775389 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:32.777134 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:34.777503 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:37.276492 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:39.775380 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:41.777135 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:44.276305 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:45.868652 1103141 pod_ready.go:81] duration metric: took 4m0.000895459s waiting for pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace to be "Ready" ...
	E0717 20:08:45.868703 1103141 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 20:08:45.868714 1103141 pod_ready.go:38] duration metric: took 4m2.373683506s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
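The loop above polled "metrics-server-74d5c6b9c-jvfz8" roughly every 2.5 seconds until the 4-minute WaitExtra budget expired with the pod still not Ready. An equivalent hand-run check (illustrative only, not part of the test harness; assumes the current kubeconfig context points at this cluster):

	# show the pod's phase and conditions
	kubectl -n kube-system get pod metrics-server-74d5c6b9c-jvfz8 -o wide

	# block until the Ready condition is met, or give up after the same 4-minute budget
	kubectl -n kube-system wait --for=condition=Ready pod/metrics-server-74d5c6b9c-jvfz8 --timeout=4m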
	I0717 20:08:45.868742 1103141 api_server.go:52] waiting for apiserver process to appear ...
	I0717 20:08:45.868791 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 20:08:45.868907 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 20:08:45.926927 1103141 cri.go:89] found id: "b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f"
	I0717 20:08:45.926965 1103141 cri.go:89] found id: ""
	I0717 20:08:45.926977 1103141 logs.go:284] 1 containers: [b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f]
	I0717 20:08:45.927049 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:45.932247 1103141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 20:08:45.932335 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 20:08:45.976080 1103141 cri.go:89] found id: "6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8"
	I0717 20:08:45.976176 1103141 cri.go:89] found id: ""
	I0717 20:08:45.976200 1103141 logs.go:284] 1 containers: [6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8]
	I0717 20:08:45.976287 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:45.981650 1103141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 20:08:45.981738 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 20:08:46.017454 1103141 cri.go:89] found id: "9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e"
	I0717 20:08:46.017487 1103141 cri.go:89] found id: ""
	I0717 20:08:46.017495 1103141 logs.go:284] 1 containers: [9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e]
	I0717 20:08:46.017578 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:46.023282 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 20:08:46.023361 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 20:08:46.055969 1103141 cri.go:89] found id: "20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52"
	I0717 20:08:46.055998 1103141 cri.go:89] found id: ""
	I0717 20:08:46.056009 1103141 logs.go:284] 1 containers: [20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52]
	I0717 20:08:46.056063 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:46.061090 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 20:08:46.061181 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 20:08:46.094968 1103141 cri.go:89] found id: "c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39"
	I0717 20:08:46.095001 1103141 cri.go:89] found id: ""
	I0717 20:08:46.095012 1103141 logs.go:284] 1 containers: [c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39]
	I0717 20:08:46.095089 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:46.099940 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 20:08:46.100018 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 20:08:46.132535 1103141 cri.go:89] found id: "7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd"
	I0717 20:08:46.132571 1103141 cri.go:89] found id: ""
	I0717 20:08:46.132586 1103141 logs.go:284] 1 containers: [7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd]
	I0717 20:08:46.132655 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:46.138029 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 20:08:46.138112 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 20:08:46.179589 1103141 cri.go:89] found id: ""
	I0717 20:08:46.179620 1103141 logs.go:284] 0 containers: []
	W0717 20:08:46.179632 1103141 logs.go:286] No container was found matching "kindnet"
	I0717 20:08:46.179640 1103141 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 20:08:46.179728 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 20:08:46.216615 1103141 cri.go:89] found id: "1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea"
	I0717 20:08:46.216642 1103141 cri.go:89] found id: ""
	I0717 20:08:46.216650 1103141 logs.go:284] 1 containers: [1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea]
	I0717 20:08:46.216782 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:46.223815 1103141 logs.go:123] Gathering logs for kube-scheduler [20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52] ...
	I0717 20:08:46.223849 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52"
	I0717 20:08:46.274046 1103141 logs.go:123] Gathering logs for kube-proxy [c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39] ...
	I0717 20:08:46.274093 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39"
	I0717 20:08:46.314239 1103141 logs.go:123] Gathering logs for kube-controller-manager [7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd] ...
	I0717 20:08:46.314285 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd"
	I0717 20:08:46.372521 1103141 logs.go:123] Gathering logs for kubelet ...
	I0717 20:08:46.372568 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 20:08:46.473516 1103141 logs.go:123] Gathering logs for describe nodes ...
	I0717 20:08:46.473576 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 20:08:46.628553 1103141 logs.go:123] Gathering logs for coredns [9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e] ...
	I0717 20:08:46.628626 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e"
	I0717 20:08:46.663929 1103141 logs.go:123] Gathering logs for storage-provisioner [1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea] ...
	I0717 20:08:46.663976 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea"
	I0717 20:08:46.699494 1103141 logs.go:123] Gathering logs for CRI-O ...
	I0717 20:08:46.699528 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 20:08:47.188357 1103141 logs.go:123] Gathering logs for container status ...
	I0717 20:08:47.188415 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 20:08:47.246863 1103141 logs.go:123] Gathering logs for dmesg ...
	I0717 20:08:47.246902 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 20:08:47.262383 1103141 logs.go:123] Gathering logs for kube-apiserver [b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f] ...
	I0717 20:08:47.262418 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f"
	I0717 20:08:47.315465 1103141 logs.go:123] Gathering logs for etcd [6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8] ...
	I0717 20:08:47.315506 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8"
	I0717 20:08:49.862911 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:08:49.880685 1103141 api_server.go:72] duration metric: took 4m9.416465331s to wait for apiserver process to appear ...
	I0717 20:08:49.880717 1103141 api_server.go:88] waiting for apiserver healthz status ...
	I0717 20:08:49.880763 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 20:08:49.880828 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 20:08:49.921832 1103141 cri.go:89] found id: "b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f"
	I0717 20:08:49.921858 1103141 cri.go:89] found id: ""
	I0717 20:08:49.921867 1103141 logs.go:284] 1 containers: [b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f]
	I0717 20:08:49.921922 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:49.927202 1103141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 20:08:49.927281 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 20:08:49.962760 1103141 cri.go:89] found id: "6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8"
	I0717 20:08:49.962784 1103141 cri.go:89] found id: ""
	I0717 20:08:49.962793 1103141 logs.go:284] 1 containers: [6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8]
	I0717 20:08:49.962850 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:49.968029 1103141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 20:08:49.968123 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 20:08:50.004191 1103141 cri.go:89] found id: "9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e"
	I0717 20:08:50.004230 1103141 cri.go:89] found id: ""
	I0717 20:08:50.004239 1103141 logs.go:284] 1 containers: [9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e]
	I0717 20:08:50.004308 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:50.009150 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 20:08:50.009223 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 20:08:50.041085 1103141 cri.go:89] found id: "20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52"
	I0717 20:08:50.041109 1103141 cri.go:89] found id: ""
	I0717 20:08:50.041118 1103141 logs.go:284] 1 containers: [20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52]
	I0717 20:08:50.041170 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:50.045541 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 20:08:50.045632 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 20:08:50.082404 1103141 cri.go:89] found id: "c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39"
	I0717 20:08:50.082439 1103141 cri.go:89] found id: ""
	I0717 20:08:50.082448 1103141 logs.go:284] 1 containers: [c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39]
	I0717 20:08:50.082510 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:50.087838 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 20:08:50.087928 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 20:08:50.130019 1103141 cri.go:89] found id: "7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd"
	I0717 20:08:50.130053 1103141 cri.go:89] found id: ""
	I0717 20:08:50.130065 1103141 logs.go:284] 1 containers: [7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd]
	I0717 20:08:50.130134 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:50.134894 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 20:08:50.134974 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 20:08:50.171033 1103141 cri.go:89] found id: ""
	I0717 20:08:50.171070 1103141 logs.go:284] 0 containers: []
	W0717 20:08:50.171081 1103141 logs.go:286] No container was found matching "kindnet"
	I0717 20:08:50.171088 1103141 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 20:08:50.171158 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 20:08:50.206952 1103141 cri.go:89] found id: "1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea"
	I0717 20:08:50.206984 1103141 cri.go:89] found id: ""
	I0717 20:08:50.206996 1103141 logs.go:284] 1 containers: [1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea]
	I0717 20:08:50.207064 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:50.211123 1103141 logs.go:123] Gathering logs for etcd [6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8] ...
	I0717 20:08:50.211152 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8"
	I0717 20:08:50.257982 1103141 logs.go:123] Gathering logs for coredns [9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e] ...
	I0717 20:08:50.258031 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e"
	I0717 20:08:50.293315 1103141 logs.go:123] Gathering logs for kube-scheduler [20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52] ...
	I0717 20:08:50.293371 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52"
	I0717 20:08:50.343183 1103141 logs.go:123] Gathering logs for kube-proxy [c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39] ...
	I0717 20:08:50.343235 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39"
	I0717 20:08:50.381821 1103141 logs.go:123] Gathering logs for kubelet ...
	I0717 20:08:50.381869 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 20:08:50.487833 1103141 logs.go:123] Gathering logs for dmesg ...
	I0717 20:08:50.487878 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 20:08:50.504213 1103141 logs.go:123] Gathering logs for describe nodes ...
	I0717 20:08:50.504259 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 20:08:50.638194 1103141 logs.go:123] Gathering logs for kube-apiserver [b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f] ...
	I0717 20:08:50.638230 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f"
	I0717 20:08:50.685572 1103141 logs.go:123] Gathering logs for kube-controller-manager [7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd] ...
	I0717 20:08:50.685627 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd"
	I0717 20:08:50.740133 1103141 logs.go:123] Gathering logs for storage-provisioner [1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea] ...
	I0717 20:08:50.740188 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea"
	I0717 20:08:50.778023 1103141 logs.go:123] Gathering logs for CRI-O ...
	I0717 20:08:50.778059 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 20:08:51.310702 1103141 logs.go:123] Gathering logs for container status ...
	I0717 20:08:51.310758 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 20:08:53.857949 1103141 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0717 20:08:53.864729 1103141 api_server.go:279] https://192.168.39.213:8443/healthz returned 200:
	ok
	I0717 20:08:53.866575 1103141 api_server.go:141] control plane version: v1.27.3
	I0717 20:08:53.866605 1103141 api_server.go:131] duration metric: took 3.985881495s to wait for apiserver health ...
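The healthz probe above is a plain HTTPS GET against the apiserver endpoint reported in the log. A hand-run equivalent (illustrative; -k skips certificate verification because the apiserver presents a cluster-local CA, and /healthz is normally readable anonymously via the default system:public-info-viewer binding):

	curl -k https://192.168.39.213:8443/healthz
	# a healthy control plane answers with:
	# ok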
	I0717 20:08:53.866613 1103141 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 20:08:53.866638 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 20:08:53.866687 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 20:08:53.902213 1103141 cri.go:89] found id: "b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f"
	I0717 20:08:53.902243 1103141 cri.go:89] found id: ""
	I0717 20:08:53.902252 1103141 logs.go:284] 1 containers: [b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f]
	I0717 20:08:53.902320 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:53.906976 1103141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 20:08:53.907073 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 20:08:53.946040 1103141 cri.go:89] found id: "6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8"
	I0717 20:08:53.946063 1103141 cri.go:89] found id: ""
	I0717 20:08:53.946071 1103141 logs.go:284] 1 containers: [6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8]
	I0717 20:08:53.946150 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:53.951893 1103141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 20:08:53.951963 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 20:08:53.988546 1103141 cri.go:89] found id: "9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e"
	I0717 20:08:53.988583 1103141 cri.go:89] found id: ""
	I0717 20:08:53.988594 1103141 logs.go:284] 1 containers: [9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e]
	I0717 20:08:53.988647 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:53.994338 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 20:08:53.994428 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 20:08:54.030092 1103141 cri.go:89] found id: "20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52"
	I0717 20:08:54.030123 1103141 cri.go:89] found id: ""
	I0717 20:08:54.030133 1103141 logs.go:284] 1 containers: [20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52]
	I0717 20:08:54.030198 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:54.035081 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 20:08:54.035189 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 20:08:54.069845 1103141 cri.go:89] found id: "c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39"
	I0717 20:08:54.069878 1103141 cri.go:89] found id: ""
	I0717 20:08:54.069889 1103141 logs.go:284] 1 containers: [c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39]
	I0717 20:08:54.069952 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:54.075257 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 20:08:54.075334 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 20:08:54.114477 1103141 cri.go:89] found id: "7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd"
	I0717 20:08:54.114516 1103141 cri.go:89] found id: ""
	I0717 20:08:54.114527 1103141 logs.go:284] 1 containers: [7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd]
	I0717 20:08:54.114602 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:54.119374 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 20:08:54.119464 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 20:08:54.160628 1103141 cri.go:89] found id: ""
	I0717 20:08:54.160660 1103141 logs.go:284] 0 containers: []
	W0717 20:08:54.160672 1103141 logs.go:286] No container was found matching "kindnet"
	I0717 20:08:54.160680 1103141 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 20:08:54.160752 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 20:08:54.200535 1103141 cri.go:89] found id: "1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea"
	I0717 20:08:54.200662 1103141 cri.go:89] found id: ""
	I0717 20:08:54.200674 1103141 logs.go:284] 1 containers: [1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea]
	I0717 20:08:54.200736 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:54.205923 1103141 logs.go:123] Gathering logs for dmesg ...
	I0717 20:08:54.205958 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 20:08:54.221020 1103141 logs.go:123] Gathering logs for describe nodes ...
	I0717 20:08:54.221057 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 20:08:54.381122 1103141 logs.go:123] Gathering logs for coredns [9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e] ...
	I0717 20:08:54.381163 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e"
	I0717 20:08:54.417207 1103141 logs.go:123] Gathering logs for kube-scheduler [20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52] ...
	I0717 20:08:54.417255 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52"
	I0717 20:08:54.469346 1103141 logs.go:123] Gathering logs for kube-proxy [c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39] ...
	I0717 20:08:54.469389 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39"
	I0717 20:08:54.513216 1103141 logs.go:123] Gathering logs for CRI-O ...
	I0717 20:08:54.513258 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 20:08:55.056597 1103141 logs.go:123] Gathering logs for kubelet ...
	I0717 20:08:55.056644 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 20:08:55.168622 1103141 logs.go:123] Gathering logs for kube-apiserver [b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f] ...
	I0717 20:08:55.168669 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f"
	I0717 20:08:55.220979 1103141 logs.go:123] Gathering logs for etcd [6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8] ...
	I0717 20:08:55.221038 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8"
	I0717 20:08:55.264086 1103141 logs.go:123] Gathering logs for kube-controller-manager [7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd] ...
	I0717 20:08:55.264124 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd"
	I0717 20:08:55.317931 1103141 logs.go:123] Gathering logs for storage-provisioner [1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea] ...
	I0717 20:08:55.317974 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea"
	I0717 20:08:55.357733 1103141 logs.go:123] Gathering logs for container status ...
	I0717 20:08:55.357770 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 20:08:57.919739 1103141 system_pods.go:59] 8 kube-system pods found
	I0717 20:08:57.919785 1103141 system_pods.go:61] "coredns-5d78c9869d-gq2b2" [833e67fa-16e2-4a5c-8c39-16cc4fbd411e] Running
	I0717 20:08:57.919795 1103141 system_pods.go:61] "etcd-embed-certs-114855" [7209c449-fbf1-4343-8636-e872684db832] Running
	I0717 20:08:57.919808 1103141 system_pods.go:61] "kube-apiserver-embed-certs-114855" [d926dfc1-71e8-44cb-9efe-4c37e0982b02] Running
	I0717 20:08:57.919817 1103141 system_pods.go:61] "kube-controller-manager-embed-certs-114855" [e16de906-3b66-4882-83ca-8d5476d45d96] Running
	I0717 20:08:57.919823 1103141 system_pods.go:61] "kube-proxy-bfvnl" [6f7fb55d-fa9f-4d08-b4ab-3814af550c01] Running
	I0717 20:08:57.919830 1103141 system_pods.go:61] "kube-scheduler-embed-certs-114855" [828c7a2f-dd4b-4318-8199-026970bb3159] Running
	I0717 20:08:57.919850 1103141 system_pods.go:61] "metrics-server-74d5c6b9c-jvfz8" [f861e320-9125-4081-b043-c90d8b027f71] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:08:57.919859 1103141 system_pods.go:61] "storage-provisioner" [994ec0db-08aa-4dd5-a137-1f6984051e65] Running
	I0717 20:08:57.919866 1103141 system_pods.go:74] duration metric: took 4.053247674s to wait for pod list to return data ...
	I0717 20:08:57.919876 1103141 default_sa.go:34] waiting for default service account to be created ...
	I0717 20:08:57.925726 1103141 default_sa.go:45] found service account: "default"
	I0717 20:08:57.925756 1103141 default_sa.go:55] duration metric: took 5.874288ms for default service account to be created ...
	I0717 20:08:57.925765 1103141 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 20:08:57.934835 1103141 system_pods.go:86] 8 kube-system pods found
	I0717 20:08:57.934869 1103141 system_pods.go:89] "coredns-5d78c9869d-gq2b2" [833e67fa-16e2-4a5c-8c39-16cc4fbd411e] Running
	I0717 20:08:57.934875 1103141 system_pods.go:89] "etcd-embed-certs-114855" [7209c449-fbf1-4343-8636-e872684db832] Running
	I0717 20:08:57.934880 1103141 system_pods.go:89] "kube-apiserver-embed-certs-114855" [d926dfc1-71e8-44cb-9efe-4c37e0982b02] Running
	I0717 20:08:57.934886 1103141 system_pods.go:89] "kube-controller-manager-embed-certs-114855" [e16de906-3b66-4882-83ca-8d5476d45d96] Running
	I0717 20:08:57.934890 1103141 system_pods.go:89] "kube-proxy-bfvnl" [6f7fb55d-fa9f-4d08-b4ab-3814af550c01] Running
	I0717 20:08:57.934894 1103141 system_pods.go:89] "kube-scheduler-embed-certs-114855" [828c7a2f-dd4b-4318-8199-026970bb3159] Running
	I0717 20:08:57.934903 1103141 system_pods.go:89] "metrics-server-74d5c6b9c-jvfz8" [f861e320-9125-4081-b043-c90d8b027f71] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:08:57.934908 1103141 system_pods.go:89] "storage-provisioner" [994ec0db-08aa-4dd5-a137-1f6984051e65] Running
	I0717 20:08:57.934917 1103141 system_pods.go:126] duration metric: took 9.146607ms to wait for k8s-apps to be running ...
	I0717 20:08:57.934924 1103141 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 20:08:57.934972 1103141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:08:57.952480 1103141 system_svc.go:56] duration metric: took 17.537719ms WaitForService to wait for kubelet.
	I0717 20:08:57.952531 1103141 kubeadm.go:581] duration metric: took 4m17.48831739s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 20:08:57.952581 1103141 node_conditions.go:102] verifying NodePressure condition ...
	I0717 20:08:57.956510 1103141 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 20:08:57.956581 1103141 node_conditions.go:123] node cpu capacity is 2
	I0717 20:08:57.956599 1103141 node_conditions.go:105] duration metric: took 4.010178ms to run NodePressure ...
	I0717 20:08:57.956633 1103141 start.go:228] waiting for startup goroutines ...
	I0717 20:08:57.956646 1103141 start.go:233] waiting for cluster config update ...
	I0717 20:08:57.956665 1103141 start.go:242] writing updated cluster config ...
	I0717 20:08:57.957107 1103141 ssh_runner.go:195] Run: rm -f paused
	I0717 20:08:58.016891 1103141 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 20:08:58.019566 1103141 out.go:177] * Done! kubectl is now configured to use "embed-certs-114855" cluster and "default" namespace by default
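The profile comes up with metrics-server still Pending. With kubectl now pointed at "embed-certs-114855", the pod can be examined directly; an illustrative post-mortem (pod name taken from the log above, commands not issued by the harness):

	kubectl --context embed-certs-114855 -n kube-system describe pod metrics-server-74d5c6b9c-jvfz8
	# container logs, if the metrics-server container ever started
	kubectl --context embed-certs-114855 -n kube-system logs metrics-server-74d5c6b9c-jvfz8 --all-containers --tail=100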
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-07-17 19:59:16 UTC, ends at Mon 2023-07-17 20:15:27 UTC. --
	Jul 17 20:15:26 old-k8s-version-149000 crio[711]: time="2023-07-17 20:15:26.955354692Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9ba57c5f-8a80-44ad-8676-6afeda8bb869 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:15:26 old-k8s-version-149000 crio[711]: time="2023-07-17 20:15:26.955579057Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf6835e7df11c2226357f6c241d1255496d2ec1bbff4467041ba7a213fc6a1df,PodSandboxId:201373d9426a8631d79520d2a7f3c9697604f0121db2b5d56757ed251f902e6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689624328673689008,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf78f6d0-4bf8-449c-8231-0df3920b8b1f,},Annotations:map[string]string{io.kubernetes.container.hash: c51d53e8,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16dcbd7056062f7da96b9b046ea6ab4939f98b862f629f6379ce5c6496a9c643,PodSandboxId:464af7fa5ab483e044eb1167c4ba36b55303430e053f5bb0bf2606adbf3c0ddf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1689624327993490169,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-ldwkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f5b5b78-acc2-460b-971e-349b7f30a211,},Annotations:map[string]string{io.kubernetes.container.hash: 70a55d56,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b328b6d3a7f052482d4e509607c564e32c1d2ba6ee21889b319e62312de18e,PodSandboxId:9711ef4b247172907cc82132ada3c7a750cdc551fe24183fafc47e3413b45e64,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1689624327803406008,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t4mmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 570c5
c22-efff-40bb-8ade-e1febdbff4f1,},Annotations:map[string]string{io.kubernetes.container.hash: 65866be2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5176d659c2276cb4d2ea690ede65a41255699284bbd1fd881eaa6630e55a2f52,PodSandboxId:c913970e62f86c6ef0b732b599998b575ee8e6b016f3ab8e6c8fab2f6be1955e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1689624301402020936,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a056e5359f37632ba7566002c292f817,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 73b4374e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1a21acc33de8e9b79400d86acf9f8064b5ccae4ef21414b9aa2deef6a431ff0,PodSandboxId:5b423dca17cb5e2f83cd8b2352af0c13bb915574510673982d39410069ce1b0a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1689624300167749289,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fa9baa16256a3310adcfd7fde65c47fa6f991f723fdf0bb190f0c32fc383866,PodSandboxId:bbf496ac378c4552e8ecae7298b7c89a386436f9e8b30c7fd93ff5104b0bc4bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1689624299913235918,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1346e3f0df0827495f5afc7d45c69f1,},Annotations:map[string]string{io.kubern
etes.container.hash: b570d46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a07469cd5bd2e9a2b83c86a2e1f90f52d958ace45f8dfef864be1b2577e9a77c,PodSandboxId:4845123d26cfcfbb998c05fcec7aaacde3f02951fd751df59783221569dbed11,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1689624299708607138,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9ba57c5f-8a80-44ad-8676-6afeda8bb869 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:15:26 old-k8s-version-149000 crio[711]: time="2023-07-17 20:15:26.998413298Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c8a77453-2e21-4cc0-b3a5-fbe2edea394e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:15:26 old-k8s-version-149000 crio[711]: time="2023-07-17 20:15:26.998508178Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c8a77453-2e21-4cc0-b3a5-fbe2edea394e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:15:26 old-k8s-version-149000 crio[711]: time="2023-07-17 20:15:26.998762599Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf6835e7df11c2226357f6c241d1255496d2ec1bbff4467041ba7a213fc6a1df,PodSandboxId:201373d9426a8631d79520d2a7f3c9697604f0121db2b5d56757ed251f902e6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689624328673689008,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf78f6d0-4bf8-449c-8231-0df3920b8b1f,},Annotations:map[string]string{io.kubernetes.container.hash: c51d53e8,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16dcbd7056062f7da96b9b046ea6ab4939f98b862f629f6379ce5c6496a9c643,PodSandboxId:464af7fa5ab483e044eb1167c4ba36b55303430e053f5bb0bf2606adbf3c0ddf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1689624327993490169,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-ldwkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f5b5b78-acc2-460b-971e-349b7f30a211,},Annotations:map[string]string{io.kubernetes.container.hash: 70a55d56,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b328b6d3a7f052482d4e509607c564e32c1d2ba6ee21889b319e62312de18e,PodSandboxId:9711ef4b247172907cc82132ada3c7a750cdc551fe24183fafc47e3413b45e64,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1689624327803406008,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t4mmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 570c5
c22-efff-40bb-8ade-e1febdbff4f1,},Annotations:map[string]string{io.kubernetes.container.hash: 65866be2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5176d659c2276cb4d2ea690ede65a41255699284bbd1fd881eaa6630e55a2f52,PodSandboxId:c913970e62f86c6ef0b732b599998b575ee8e6b016f3ab8e6c8fab2f6be1955e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1689624301402020936,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a056e5359f37632ba7566002c292f817,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 73b4374e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1a21acc33de8e9b79400d86acf9f8064b5ccae4ef21414b9aa2deef6a431ff0,PodSandboxId:5b423dca17cb5e2f83cd8b2352af0c13bb915574510673982d39410069ce1b0a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1689624300167749289,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fa9baa16256a3310adcfd7fde65c47fa6f991f723fdf0bb190f0c32fc383866,PodSandboxId:bbf496ac378c4552e8ecae7298b7c89a386436f9e8b30c7fd93ff5104b0bc4bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1689624299913235918,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1346e3f0df0827495f5afc7d45c69f1,},Annotations:map[string]string{io.kubern
etes.container.hash: b570d46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a07469cd5bd2e9a2b83c86a2e1f90f52d958ace45f8dfef864be1b2577e9a77c,PodSandboxId:4845123d26cfcfbb998c05fcec7aaacde3f02951fd751df59783221569dbed11,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1689624299708607138,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c8a77453-2e21-4cc0-b3a5-fbe2edea394e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:15:27 old-k8s-version-149000 crio[711]: time="2023-07-17 20:15:27.039798633Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7cae791f-90fa-46da-96f4-c9f7686e1726 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:15:27 old-k8s-version-149000 crio[711]: time="2023-07-17 20:15:27.039894564Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7cae791f-90fa-46da-96f4-c9f7686e1726 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:15:27 old-k8s-version-149000 crio[711]: time="2023-07-17 20:15:27.040144121Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf6835e7df11c2226357f6c241d1255496d2ec1bbff4467041ba7a213fc6a1df,PodSandboxId:201373d9426a8631d79520d2a7f3c9697604f0121db2b5d56757ed251f902e6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689624328673689008,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf78f6d0-4bf8-449c-8231-0df3920b8b1f,},Annotations:map[string]string{io.kubernetes.container.hash: c51d53e8,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16dcbd7056062f7da96b9b046ea6ab4939f98b862f629f6379ce5c6496a9c643,PodSandboxId:464af7fa5ab483e044eb1167c4ba36b55303430e053f5bb0bf2606adbf3c0ddf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1689624327993490169,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-ldwkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f5b5b78-acc2-460b-971e-349b7f30a211,},Annotations:map[string]string{io.kubernetes.container.hash: 70a55d56,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b328b6d3a7f052482d4e509607c564e32c1d2ba6ee21889b319e62312de18e,PodSandboxId:9711ef4b247172907cc82132ada3c7a750cdc551fe24183fafc47e3413b45e64,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1689624327803406008,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t4mmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 570c5
c22-efff-40bb-8ade-e1febdbff4f1,},Annotations:map[string]string{io.kubernetes.container.hash: 65866be2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5176d659c2276cb4d2ea690ede65a41255699284bbd1fd881eaa6630e55a2f52,PodSandboxId:c913970e62f86c6ef0b732b599998b575ee8e6b016f3ab8e6c8fab2f6be1955e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1689624301402020936,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a056e5359f37632ba7566002c292f817,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 73b4374e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1a21acc33de8e9b79400d86acf9f8064b5ccae4ef21414b9aa2deef6a431ff0,PodSandboxId:5b423dca17cb5e2f83cd8b2352af0c13bb915574510673982d39410069ce1b0a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1689624300167749289,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fa9baa16256a3310adcfd7fde65c47fa6f991f723fdf0bb190f0c32fc383866,PodSandboxId:bbf496ac378c4552e8ecae7298b7c89a386436f9e8b30c7fd93ff5104b0bc4bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1689624299913235918,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1346e3f0df0827495f5afc7d45c69f1,},Annotations:map[string]string{io.kubern
etes.container.hash: b570d46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a07469cd5bd2e9a2b83c86a2e1f90f52d958ace45f8dfef864be1b2577e9a77c,PodSandboxId:4845123d26cfcfbb998c05fcec7aaacde3f02951fd751df59783221569dbed11,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1689624299708607138,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7cae791f-90fa-46da-96f4-c9f7686e1726 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:15:27 old-k8s-version-149000 crio[711]: time="2023-07-17 20:15:27.090158131Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=93ca9c16-c4d6-473f-8d84-40618b815d00 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:15:27 old-k8s-version-149000 crio[711]: time="2023-07-17 20:15:27.090253625Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=93ca9c16-c4d6-473f-8d84-40618b815d00 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:15:27 old-k8s-version-149000 crio[711]: time="2023-07-17 20:15:27.090432319Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf6835e7df11c2226357f6c241d1255496d2ec1bbff4467041ba7a213fc6a1df,PodSandboxId:201373d9426a8631d79520d2a7f3c9697604f0121db2b5d56757ed251f902e6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689624328673689008,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf78f6d0-4bf8-449c-8231-0df3920b8b1f,},Annotations:map[string]string{io.kubernetes.container.hash: c51d53e8,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16dcbd7056062f7da96b9b046ea6ab4939f98b862f629f6379ce5c6496a9c643,PodSandboxId:464af7fa5ab483e044eb1167c4ba36b55303430e053f5bb0bf2606adbf3c0ddf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1689624327993490169,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-ldwkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f5b5b78-acc2-460b-971e-349b7f30a211,},Annotations:map[string]string{io.kubernetes.container.hash: 70a55d56,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b328b6d3a7f052482d4e509607c564e32c1d2ba6ee21889b319e62312de18e,PodSandboxId:9711ef4b247172907cc82132ada3c7a750cdc551fe24183fafc47e3413b45e64,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1689624327803406008,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t4mmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 570c5
c22-efff-40bb-8ade-e1febdbff4f1,},Annotations:map[string]string{io.kubernetes.container.hash: 65866be2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5176d659c2276cb4d2ea690ede65a41255699284bbd1fd881eaa6630e55a2f52,PodSandboxId:c913970e62f86c6ef0b732b599998b575ee8e6b016f3ab8e6c8fab2f6be1955e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1689624301402020936,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a056e5359f37632ba7566002c292f817,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 73b4374e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1a21acc33de8e9b79400d86acf9f8064b5ccae4ef21414b9aa2deef6a431ff0,PodSandboxId:5b423dca17cb5e2f83cd8b2352af0c13bb915574510673982d39410069ce1b0a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1689624300167749289,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fa9baa16256a3310adcfd7fde65c47fa6f991f723fdf0bb190f0c32fc383866,PodSandboxId:bbf496ac378c4552e8ecae7298b7c89a386436f9e8b30c7fd93ff5104b0bc4bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1689624299913235918,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1346e3f0df0827495f5afc7d45c69f1,},Annotations:map[string]string{io.kubern
etes.container.hash: b570d46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a07469cd5bd2e9a2b83c86a2e1f90f52d958ace45f8dfef864be1b2577e9a77c,PodSandboxId:4845123d26cfcfbb998c05fcec7aaacde3f02951fd751df59783221569dbed11,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1689624299708607138,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=93ca9c16-c4d6-473f-8d84-40618b815d00 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:15:27 old-k8s-version-149000 crio[711]: time="2023-07-17 20:15:27.137653510Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2aa76161-99aa-4387-93ea-7df7c278fe0d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:15:27 old-k8s-version-149000 crio[711]: time="2023-07-17 20:15:27.137784273Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2aa76161-99aa-4387-93ea-7df7c278fe0d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:15:27 old-k8s-version-149000 crio[711]: time="2023-07-17 20:15:27.137967018Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf6835e7df11c2226357f6c241d1255496d2ec1bbff4467041ba7a213fc6a1df,PodSandboxId:201373d9426a8631d79520d2a7f3c9697604f0121db2b5d56757ed251f902e6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689624328673689008,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf78f6d0-4bf8-449c-8231-0df3920b8b1f,},Annotations:map[string]string{io.kubernetes.container.hash: c51d53e8,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16dcbd7056062f7da96b9b046ea6ab4939f98b862f629f6379ce5c6496a9c643,PodSandboxId:464af7fa5ab483e044eb1167c4ba36b55303430e053f5bb0bf2606adbf3c0ddf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1689624327993490169,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-ldwkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f5b5b78-acc2-460b-971e-349b7f30a211,},Annotations:map[string]string{io.kubernetes.container.hash: 70a55d56,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b328b6d3a7f052482d4e509607c564e32c1d2ba6ee21889b319e62312de18e,PodSandboxId:9711ef4b247172907cc82132ada3c7a750cdc551fe24183fafc47e3413b45e64,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1689624327803406008,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t4mmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 570c5
c22-efff-40bb-8ade-e1febdbff4f1,},Annotations:map[string]string{io.kubernetes.container.hash: 65866be2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5176d659c2276cb4d2ea690ede65a41255699284bbd1fd881eaa6630e55a2f52,PodSandboxId:c913970e62f86c6ef0b732b599998b575ee8e6b016f3ab8e6c8fab2f6be1955e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1689624301402020936,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a056e5359f37632ba7566002c292f817,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 73b4374e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1a21acc33de8e9b79400d86acf9f8064b5ccae4ef21414b9aa2deef6a431ff0,PodSandboxId:5b423dca17cb5e2f83cd8b2352af0c13bb915574510673982d39410069ce1b0a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1689624300167749289,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fa9baa16256a3310adcfd7fde65c47fa6f991f723fdf0bb190f0c32fc383866,PodSandboxId:bbf496ac378c4552e8ecae7298b7c89a386436f9e8b30c7fd93ff5104b0bc4bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1689624299913235918,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1346e3f0df0827495f5afc7d45c69f1,},Annotations:map[string]string{io.kubern
etes.container.hash: b570d46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a07469cd5bd2e9a2b83c86a2e1f90f52d958ace45f8dfef864be1b2577e9a77c,PodSandboxId:4845123d26cfcfbb998c05fcec7aaacde3f02951fd751df59783221569dbed11,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1689624299708607138,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2aa76161-99aa-4387-93ea-7df7c278fe0d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:15:27 old-k8s-version-149000 crio[711]: time="2023-07-17 20:15:27.178050884Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a6bf07b1-a87b-43d2-8874-b7eaf37ec7e0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:15:27 old-k8s-version-149000 crio[711]: time="2023-07-17 20:15:27.178152700Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a6bf07b1-a87b-43d2-8874-b7eaf37ec7e0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:15:27 old-k8s-version-149000 crio[711]: time="2023-07-17 20:15:27.178368141Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf6835e7df11c2226357f6c241d1255496d2ec1bbff4467041ba7a213fc6a1df,PodSandboxId:201373d9426a8631d79520d2a7f3c9697604f0121db2b5d56757ed251f902e6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689624328673689008,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf78f6d0-4bf8-449c-8231-0df3920b8b1f,},Annotations:map[string]string{io.kubernetes.container.hash: c51d53e8,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16dcbd7056062f7da96b9b046ea6ab4939f98b862f629f6379ce5c6496a9c643,PodSandboxId:464af7fa5ab483e044eb1167c4ba36b55303430e053f5bb0bf2606adbf3c0ddf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1689624327993490169,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-ldwkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f5b5b78-acc2-460b-971e-349b7f30a211,},Annotations:map[string]string{io.kubernetes.container.hash: 70a55d56,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b328b6d3a7f052482d4e509607c564e32c1d2ba6ee21889b319e62312de18e,PodSandboxId:9711ef4b247172907cc82132ada3c7a750cdc551fe24183fafc47e3413b45e64,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1689624327803406008,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t4mmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 570c5
c22-efff-40bb-8ade-e1febdbff4f1,},Annotations:map[string]string{io.kubernetes.container.hash: 65866be2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5176d659c2276cb4d2ea690ede65a41255699284bbd1fd881eaa6630e55a2f52,PodSandboxId:c913970e62f86c6ef0b732b599998b575ee8e6b016f3ab8e6c8fab2f6be1955e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1689624301402020936,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a056e5359f37632ba7566002c292f817,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 73b4374e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1a21acc33de8e9b79400d86acf9f8064b5ccae4ef21414b9aa2deef6a431ff0,PodSandboxId:5b423dca17cb5e2f83cd8b2352af0c13bb915574510673982d39410069ce1b0a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1689624300167749289,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fa9baa16256a3310adcfd7fde65c47fa6f991f723fdf0bb190f0c32fc383866,PodSandboxId:bbf496ac378c4552e8ecae7298b7c89a386436f9e8b30c7fd93ff5104b0bc4bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1689624299913235918,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1346e3f0df0827495f5afc7d45c69f1,},Annotations:map[string]string{io.kubern
etes.container.hash: b570d46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a07469cd5bd2e9a2b83c86a2e1f90f52d958ace45f8dfef864be1b2577e9a77c,PodSandboxId:4845123d26cfcfbb998c05fcec7aaacde3f02951fd751df59783221569dbed11,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1689624299708607138,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a6bf07b1-a87b-43d2-8874-b7eaf37ec7e0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:15:27 old-k8s-version-149000 crio[711]: time="2023-07-17 20:15:27.184862963Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=bca06b1f-ced9-4d02-9169-387209c76d51 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Jul 17 20:15:27 old-k8s-version-149000 crio[711]: time="2023-07-17 20:15:27.185169151Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:b585be276b5b7ce270f241de33195092ae32dd182412c7dcffd10fbec5e21d5a,Metadata:&PodSandboxMetadata{Name:metrics-server-74d5856cc6-cxzws,Uid:493d4f17-8ddf-4d76-aa86-33fc669de018,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689624328379462031,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-74d5856cc6-cxzws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 493d4f17-8ddf-4d76-aa86-33fc669de018,k8s-app: metrics-server,pod-template-hash: 74d5856cc6,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T20:05:28.021435313Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:464af7fa5ab483e044eb1167c4ba36b55303430e053f5bb0bf2606adbf3c0ddf,Metadata:&PodSandboxMetadata{Name:coredns-5644d7b6d9-ldwkf,Uid:1f5b5b78-acc2-460b-971e-349b7
f30a211,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689624327694634894,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5644d7b6d9-ldwkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f5b5b78-acc2-460b-971e-349b7f30a211,k8s-app: kube-dns,pod-template-hash: 5644d7b6d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T20:05:27.34243108Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:201373d9426a8631d79520d2a7f3c9697604f0121db2b5d56757ed251f902e6b,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:cf78f6d0-4bf8-449c-8231-0df3920b8b1f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689624327472257942,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf78f6d0-4bf8-449c-8231-0d
f3920b8b1f,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-07-17T20:05:27.115350213Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9711ef4b247172907cc82132ada3c7a750cdc551fe24183fafc47e3413b45e64,Metadata:&PodSandboxMetadata{Name:kube-proxy-t4mmh,Uid:570c5c22-efff-40bb-8ade
-e1febdbff4f1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689624326051612807,Labels:map[string]string{controller-revision-hash: 68594d95c,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-t4mmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 570c5c22-efff-40bb-8ade-e1febdbff4f1,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T20:05:25.689486702Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5b423dca17cb5e2f83cd8b2352af0c13bb915574510673982d39410069ce1b0a,Metadata:&PodSandboxMetadata{Name:kube-scheduler-old-k8s-version-149000,Uid:b3d303074fe0ca1d42a8bd9ed248df09,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689624299295723724,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1
d42a8bd9ed248df09,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b3d303074fe0ca1d42a8bd9ed248df09,kubernetes.io/config.seen: 2023-07-17T20:04:58.840093378Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c913970e62f86c6ef0b732b599998b575ee8e6b016f3ab8e6c8fab2f6be1955e,Metadata:&PodSandboxMetadata{Name:etcd-old-k8s-version-149000,Uid:a056e5359f37632ba7566002c292f817,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689624299260634772,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a056e5359f37632ba7566002c292f817,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a056e5359f37632ba7566002c292f817,kubernetes.io/config.seen: 2023-07-17T20:04:58.841478626Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bbf496ac378c4552e8ecae7298b7c89a386436f9e8b30c7fd93ff510
4b0bc4bf,Metadata:&PodSandboxMetadata{Name:kube-apiserver-old-k8s-version-149000,Uid:e1346e3f0df0827495f5afc7d45c69f1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689624299210438530,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1346e3f0df0827495f5afc7d45c69f1,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e1346e3f0df0827495f5afc7d45c69f1,kubernetes.io/config.seen: 2023-07-17T20:04:58.835207013Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4845123d26cfcfbb998c05fcec7aaacde3f02951fd751df59783221569dbed11,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-old-k8s-version-149000,Uid:7376ddb4f190a0ded9394063437bcb4e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689624299203289375,Labels:map[string]string{component: kube-controller-manager,io.kubernete
s.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7376ddb4f190a0ded9394063437bcb4e,kubernetes.io/config.seen: 2023-07-17T20:04:58.83841542Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=bca06b1f-ced9-4d02-9169-387209c76d51 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Jul 17 20:15:27 old-k8s-version-149000 crio[711]: time="2023-07-17 20:15:27.186142791Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=56750681-9546-440a-936c-d7a1c30a66ff name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:15:27 old-k8s-version-149000 crio[711]: time="2023-07-17 20:15:27.186197029Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=56750681-9546-440a-936c-d7a1c30a66ff name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:15:27 old-k8s-version-149000 crio[711]: time="2023-07-17 20:15:27.186345064Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf6835e7df11c2226357f6c241d1255496d2ec1bbff4467041ba7a213fc6a1df,PodSandboxId:201373d9426a8631d79520d2a7f3c9697604f0121db2b5d56757ed251f902e6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689624328673689008,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf78f6d0-4bf8-449c-8231-0df3920b8b1f,},Annotations:map[string]string{io.kubernetes.container.hash: c51d53e8,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16dcbd7056062f7da96b9b046ea6ab4939f98b862f629f6379ce5c6496a9c643,PodSandboxId:464af7fa5ab483e044eb1167c4ba36b55303430e053f5bb0bf2606adbf3c0ddf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1689624327993490169,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-ldwkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f5b5b78-acc2-460b-971e-349b7f30a211,},Annotations:map[string]string{io.kubernetes.container.hash: 70a55d56,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b328b6d3a7f052482d4e509607c564e32c1d2ba6ee21889b319e62312de18e,PodSandboxId:9711ef4b247172907cc82132ada3c7a750cdc551fe24183fafc47e3413b45e64,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1689624327803406008,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t4mmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 570c5
c22-efff-40bb-8ade-e1febdbff4f1,},Annotations:map[string]string{io.kubernetes.container.hash: 65866be2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5176d659c2276cb4d2ea690ede65a41255699284bbd1fd881eaa6630e55a2f52,PodSandboxId:c913970e62f86c6ef0b732b599998b575ee8e6b016f3ab8e6c8fab2f6be1955e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1689624301402020936,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a056e5359f37632ba7566002c292f817,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 73b4374e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1a21acc33de8e9b79400d86acf9f8064b5ccae4ef21414b9aa2deef6a431ff0,PodSandboxId:5b423dca17cb5e2f83cd8b2352af0c13bb915574510673982d39410069ce1b0a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1689624300167749289,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fa9baa16256a3310adcfd7fde65c47fa6f991f723fdf0bb190f0c32fc383866,PodSandboxId:bbf496ac378c4552e8ecae7298b7c89a386436f9e8b30c7fd93ff5104b0bc4bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1689624299913235918,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1346e3f0df0827495f5afc7d45c69f1,},Annotations:map[string]string{io.kubern
etes.container.hash: b570d46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a07469cd5bd2e9a2b83c86a2e1f90f52d958ace45f8dfef864be1b2577e9a77c,PodSandboxId:4845123d26cfcfbb998c05fcec7aaacde3f02951fd751df59783221569dbed11,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1689624299708607138,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=56750681-9546-440a-936c-d7a1c30a66ff name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:15:27 old-k8s-version-149000 crio[711]: time="2023-07-17 20:15:27.226619063Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0fa69f8e-943e-43dc-8caa-36f16c937ec8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:15:27 old-k8s-version-149000 crio[711]: time="2023-07-17 20:15:27.226689838Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0fa69f8e-943e-43dc-8caa-36f16c937ec8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:15:27 old-k8s-version-149000 crio[711]: time="2023-07-17 20:15:27.226871665Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf6835e7df11c2226357f6c241d1255496d2ec1bbff4467041ba7a213fc6a1df,PodSandboxId:201373d9426a8631d79520d2a7f3c9697604f0121db2b5d56757ed251f902e6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689624328673689008,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf78f6d0-4bf8-449c-8231-0df3920b8b1f,},Annotations:map[string]string{io.kubernetes.container.hash: c51d53e8,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16dcbd7056062f7da96b9b046ea6ab4939f98b862f629f6379ce5c6496a9c643,PodSandboxId:464af7fa5ab483e044eb1167c4ba36b55303430e053f5bb0bf2606adbf3c0ddf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1689624327993490169,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-ldwkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f5b5b78-acc2-460b-971e-349b7f30a211,},Annotations:map[string]string{io.kubernetes.container.hash: 70a55d56,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b328b6d3a7f052482d4e509607c564e32c1d2ba6ee21889b319e62312de18e,PodSandboxId:9711ef4b247172907cc82132ada3c7a750cdc551fe24183fafc47e3413b45e64,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1689624327803406008,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t4mmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 570c5
c22-efff-40bb-8ade-e1febdbff4f1,},Annotations:map[string]string{io.kubernetes.container.hash: 65866be2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5176d659c2276cb4d2ea690ede65a41255699284bbd1fd881eaa6630e55a2f52,PodSandboxId:c913970e62f86c6ef0b732b599998b575ee8e6b016f3ab8e6c8fab2f6be1955e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1689624301402020936,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a056e5359f37632ba7566002c292f817,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 73b4374e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1a21acc33de8e9b79400d86acf9f8064b5ccae4ef21414b9aa2deef6a431ff0,PodSandboxId:5b423dca17cb5e2f83cd8b2352af0c13bb915574510673982d39410069ce1b0a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1689624300167749289,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fa9baa16256a3310adcfd7fde65c47fa6f991f723fdf0bb190f0c32fc383866,PodSandboxId:bbf496ac378c4552e8ecae7298b7c89a386436f9e8b30c7fd93ff5104b0bc4bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1689624299913235918,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1346e3f0df0827495f5afc7d45c69f1,},Annotations:map[string]string{io.kubern
etes.container.hash: b570d46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a07469cd5bd2e9a2b83c86a2e1f90f52d958ace45f8dfef864be1b2577e9a77c,PodSandboxId:4845123d26cfcfbb998c05fcec7aaacde3f02951fd751df59783221569dbed11,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1689624299708607138,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0fa69f8e-943e-43dc-8caa-36f16c937ec8 name=/runtime.v1alpha2.RuntimeService/ListContainers
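
The repeated /runtime.v1alpha2.RuntimeService/ListContainers request/response pairs above are ordinary CRI polling against CRI-O's socket; each empty filter is answered with the full container list ("No filters were applied, returning full container list"). As a hedged sketch only (not part of the captured logs), the same query can be issued by hand with the v1alpha2 CRI bindings named in the log; the package paths and the socket being reachable from where the code runs are assumptions:

// Hedged sketch, not taken from this report: a minimal CRI client that issues the
// same ListContainers call that CRI-O is answering above. Assumes the v1alpha2 CRI
// bindings (k8s.io/cri-api) and that /var/run/crio/crio.sock is reachable locally.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
)

func main() {
	// The socket path matches the kubeadm.alpha.kubernetes.io/cri-socket
	// annotation shown in the node description further below.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	// An empty filter corresponds to the "No filters were applied" debug line above.
	resp, err := client.ListContainers(context.Background(), &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %s  %s\n", c.Id[:13], c.Metadata.Name, c.State)
	}
}

crictl ps -a inside the VM drives the same RPC, which is why the truncated IDs in the container status table below match the Id fields in these responses.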
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	bf6835e7df11c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   201373d9426a8
	16dcbd7056062       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   9 minutes ago       Running             coredns                   0                   464af7fa5ab48
	d2b328b6d3a7f       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   9 minutes ago       Running             kube-proxy                0                   9711ef4b24717
	5176d659c2276       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   10 minutes ago      Running             etcd                      0                   c913970e62f86
	d1a21acc33de8       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   10 minutes ago      Running             kube-scheduler            0                   5b423dca17cb5
	9fa9baa16256a       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   10 minutes ago      Running             kube-apiserver            0                   bbf496ac378c4
	a07469cd5bd2e       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   10 minutes ago      Running             kube-controller-manager   0                   4845123d26cfc
	
	* 
	* ==> coredns [16dcbd7056062f7da96b9b046ea6ab4939f98b862f629f6379ce5c6496a9c643] <==
	* .:53
	2023-07-17T20:05:28.462Z [INFO] plugin/reload: Running configuration MD5 = 06ff7f9bb57317d7ab02f5fb9baaa00d
	2023-07-17T20:05:28.463Z [INFO] CoreDNS-1.6.2
	2023-07-17T20:05:28.463Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2023-07-17T20:05:28.480Z [INFO] 127.0.0.1:33108 - 42238 "HINFO IN 8359485099469103757.6097109787848091355. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014647044s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-149000
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-149000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5
	                    minikube.k8s.io/name=old-k8s-version-149000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T20_05_10_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 20:05:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 20:15:05 +0000   Mon, 17 Jul 2023 20:05:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 20:15:05 +0000   Mon, 17 Jul 2023 20:05:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 20:15:05 +0000   Mon, 17 Jul 2023 20:05:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 20:15:05 +0000   Mon, 17 Jul 2023 20:05:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.177
	  Hostname:    old-k8s-version-149000
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 6b77956aa43d4cc8852ff5e5c774a7ae
	 System UUID:                6b77956a-a43d-4cc8-852f-f5e5c774a7ae
	 Boot ID:                    f3291a84-0139-43be-94c0-25c5c67f2cac
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-ldwkf                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                etcd-old-k8s-version-149000                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                kube-apiserver-old-k8s-version-149000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                kube-controller-manager-old-k8s-version-149000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                kube-proxy-t4mmh                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                kube-scheduler-old-k8s-version-149000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                metrics-server-74d5856cc6-cxzws                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         10m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  Starting                 10m                kubelet, old-k8s-version-149000     Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet, old-k8s-version-149000     Node old-k8s-version-149000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet, old-k8s-version-149000     Node old-k8s-version-149000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet, old-k8s-version-149000     Node old-k8s-version-149000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet, old-k8s-version-149000     Updated Node Allocatable limit across pods
	  Normal  Starting                 9m59s              kube-proxy, old-k8s-version-149000  Starting kube-proxy.
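
The node description above is the standard describe-node view (conditions, allocatable resources, per-pod requests, events). As a hedged sketch only, the same conditions and allocatable figures can be read programmatically with client-go; the kubeconfig path is an assumption, while the node name is the one described above:

// Hedged sketch, not part of the report: read the same node status via client-go.
// The kubeconfig path below is an assumed placeholder.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	node, err := cs.CoreV1().Nodes().Get(context.Background(), "old-k8s-version-149000", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Conditions mirror the MemoryPressure/DiskPressure/PIDPressure/Ready rows above.
	for _, cond := range node.Status.Conditions {
		fmt.Printf("%-16s %s  %s\n", cond.Type, cond.Status, cond.Reason)
	}
	fmt.Println("allocatable cpu:", node.Status.Allocatable.Cpu().String())
	fmt.Println("allocatable memory:", node.Status.Allocatable.Memory().String())
}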
	
	* 
	* ==> dmesg <==
	* [Jul17 19:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.092674] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.241092] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.726038] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.163907] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.676919] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.839557] systemd-fstab-generator[635]: Ignoring "noauto" for root device
	[  +0.127668] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.189640] systemd-fstab-generator[659]: Ignoring "noauto" for root device
	[  +0.135101] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.265498] systemd-fstab-generator[694]: Ignoring "noauto" for root device
	[ +20.134693] systemd-fstab-generator[1035]: Ignoring "noauto" for root device
	[  +0.489721] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jul17 20:00] kauditd_printk_skb: 13 callbacks suppressed
	[  +6.755286] kauditd_printk_skb: 2 callbacks suppressed
	[Jul17 20:04] kauditd_printk_skb: 2 callbacks suppressed
	[ +12.747541] systemd-fstab-generator[3108]: Ignoring "noauto" for root device
	[Jul17 20:05] kauditd_printk_skb: 4 callbacks suppressed
	
	* 
	* ==> etcd [5176d659c2276cb4d2ea690ede65a41255699284bbd1fd881eaa6630e55a2f52] <==
	* 2023-07-17 20:05:01.575872 I | raft: 5a25ba9993a27c1b became follower at term 0
	2023-07-17 20:05:01.575908 I | raft: newRaft 5a25ba9993a27c1b [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-07-17 20:05:01.575930 I | raft: 5a25ba9993a27c1b became follower at term 1
	2023-07-17 20:05:01.595004 W | auth: simple token is not cryptographically signed
	2023-07-17 20:05:01.601503 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-07-17 20:05:01.603740 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-07-17 20:05:01.603945 I | embed: listening for metrics on http://192.168.50.177:2381
	2023-07-17 20:05:01.604288 I | etcdserver: 5a25ba9993a27c1b as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-07-17 20:05:01.605180 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-07-17 20:05:01.605457 I | etcdserver/membership: added member 5a25ba9993a27c1b [https://192.168.50.177:2380] to cluster 3f06f8e9368d6a9e
	2023-07-17 20:05:01.676646 I | raft: 5a25ba9993a27c1b is starting a new election at term 1
	2023-07-17 20:05:01.676921 I | raft: 5a25ba9993a27c1b became candidate at term 2
	2023-07-17 20:05:01.677033 I | raft: 5a25ba9993a27c1b received MsgVoteResp from 5a25ba9993a27c1b at term 2
	2023-07-17 20:05:01.677064 I | raft: 5a25ba9993a27c1b became leader at term 2
	2023-07-17 20:05:01.677179 I | raft: raft.node: 5a25ba9993a27c1b elected leader 5a25ba9993a27c1b at term 2
	2023-07-17 20:05:01.677792 I | etcdserver: published {Name:old-k8s-version-149000 ClientURLs:[https://192.168.50.177:2379]} to cluster 3f06f8e9368d6a9e
	2023-07-17 20:05:01.677907 I | embed: ready to serve client requests
	2023-07-17 20:05:01.677975 I | embed: ready to serve client requests
	2023-07-17 20:05:01.679237 I | embed: serving client requests on 192.168.50.177:2379
	2023-07-17 20:05:01.679405 I | embed: serving client requests on 127.0.0.1:2379
	2023-07-17 20:05:01.679783 I | etcdserver: setting up the initial cluster version to 3.3
	2023-07-17 20:05:01.687694 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-07-17 20:05:01.687771 I | etcdserver/api: enabled capabilities for version 3.3
	2023-07-17 20:15:01.913475 I | mvcc: store.index: compact 679
	2023-07-17 20:15:01.916690 I | mvcc: finished scheduled compaction at 679 (took 2.624362ms)
	
	* 
	* ==> kernel <==
	*  20:15:27 up 16 min,  0 users,  load average: 0.50, 0.43, 0.27
	Linux old-k8s-version-149000 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [9fa9baa16256a3310adcfd7fde65c47fa6f991f723fdf0bb190f0c32fc383866] <==
	* I0717 20:08:28.652992       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0717 20:08:28.653145       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 20:08:28.653235       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 20:08:28.653252       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 20:10:06.342295       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0717 20:10:06.342844       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 20:10:06.342975       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 20:10:06.343023       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 20:11:06.343455       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0717 20:11:06.343782       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 20:11:06.343859       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 20:11:06.343883       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 20:13:06.344383       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0717 20:13:06.344583       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 20:13:06.344656       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 20:13:06.344668       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 20:15:06.346041       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0717 20:15:06.346588       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 20:15:06.346724       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 20:15:06.346778       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [a07469cd5bd2e9a2b83c86a2e1f90f52d958ace45f8dfef864be1b2577e9a77c] <==
	* E0717 20:08:57.393105       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 20:09:10.653395       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 20:09:27.645711       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 20:09:42.656049       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 20:09:57.898244       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 20:10:14.658340       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 20:10:28.160981       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 20:10:46.660409       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 20:10:58.413500       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 20:11:18.662843       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 20:11:28.665735       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 20:11:50.665084       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 20:11:58.918090       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 20:12:22.667461       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 20:12:29.170326       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 20:12:54.669756       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 20:12:59.422415       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 20:13:26.671888       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 20:13:29.674402       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 20:13:58.675379       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 20:13:59.927054       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0717 20:14:30.178954       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 20:14:30.678251       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 20:15:00.431689       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 20:15:02.680092       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [d2b328b6d3a7f052482d4e509607c564e32c1d2ba6ee21889b319e62312de18e] <==
	* W0717 20:05:28.495472       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0717 20:05:28.529210       1 node.go:135] Successfully retrieved node IP: 192.168.50.177
	I0717 20:05:28.529351       1 server_others.go:149] Using iptables Proxier.
	I0717 20:05:28.530891       1 server.go:529] Version: v1.16.0
	I0717 20:05:28.534086       1 config.go:313] Starting service config controller
	I0717 20:05:28.534477       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0717 20:05:28.537757       1 config.go:131] Starting endpoints config controller
	I0717 20:05:28.557264       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0717 20:05:28.653335       1 shared_informer.go:204] Caches are synced for service config 
	I0717 20:05:28.658263       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [d1a21acc33de8e9b79400d86acf9f8064b5ccae4ef21414b9aa2deef6a431ff0] <==
	* I0717 20:05:05.339633       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0717 20:05:05.389246       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 20:05:05.389370       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 20:05:05.394737       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 20:05:05.395010       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 20:05:05.395074       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 20:05:05.395126       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 20:05:05.395185       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 20:05:05.395298       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 20:05:05.395349       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 20:05:05.395393       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 20:05:05.396892       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 20:05:06.391861       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 20:05:06.398008       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 20:05:06.399950       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 20:05:06.400164       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 20:05:06.401449       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 20:05:06.402452       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 20:05:06.403383       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 20:05:06.407264       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 20:05:06.407870       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 20:05:06.410066       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 20:05:06.412220       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 20:05:25.318005       1 factory.go:585] pod is already present in the activeQ
	E0717 20:05:25.355627       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-07-17 19:59:16 UTC, ends at Mon 2023-07-17 20:15:27 UTC. --
	Jul 17 20:10:48 old-k8s-version-149000 kubelet[3114]: E0717 20:10:48.147224    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 20:11:00 old-k8s-version-149000 kubelet[3114]: E0717 20:11:00.170510    3114 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 17 20:11:00 old-k8s-version-149000 kubelet[3114]: E0717 20:11:00.170710    3114 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 17 20:11:00 old-k8s-version-149000 kubelet[3114]: E0717 20:11:00.170821    3114 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 17 20:11:00 old-k8s-version-149000 kubelet[3114]: E0717 20:11:00.170863    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Jul 17 20:11:14 old-k8s-version-149000 kubelet[3114]: E0717 20:11:14.148429    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 20:11:27 old-k8s-version-149000 kubelet[3114]: E0717 20:11:27.147029    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 20:11:40 old-k8s-version-149000 kubelet[3114]: E0717 20:11:40.147047    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 20:11:54 old-k8s-version-149000 kubelet[3114]: E0717 20:11:54.148136    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 20:12:09 old-k8s-version-149000 kubelet[3114]: E0717 20:12:09.146830    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 20:12:21 old-k8s-version-149000 kubelet[3114]: E0717 20:12:21.146776    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 20:12:36 old-k8s-version-149000 kubelet[3114]: E0717 20:12:36.147486    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 20:12:51 old-k8s-version-149000 kubelet[3114]: E0717 20:12:51.146712    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 20:13:06 old-k8s-version-149000 kubelet[3114]: E0717 20:13:06.146672    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 20:13:18 old-k8s-version-149000 kubelet[3114]: E0717 20:13:18.147263    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 20:13:32 old-k8s-version-149000 kubelet[3114]: E0717 20:13:32.148049    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 20:13:45 old-k8s-version-149000 kubelet[3114]: E0717 20:13:45.147321    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 20:13:56 old-k8s-version-149000 kubelet[3114]: E0717 20:13:56.147234    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 20:14:09 old-k8s-version-149000 kubelet[3114]: E0717 20:14:09.146764    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 20:14:20 old-k8s-version-149000 kubelet[3114]: E0717 20:14:20.148766    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 20:14:34 old-k8s-version-149000 kubelet[3114]: E0717 20:14:34.146930    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 20:14:47 old-k8s-version-149000 kubelet[3114]: E0717 20:14:47.146975    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 20:14:58 old-k8s-version-149000 kubelet[3114]: E0717 20:14:58.240263    3114 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Jul 17 20:15:02 old-k8s-version-149000 kubelet[3114]: E0717 20:15:02.146637    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 20:15:13 old-k8s-version-149000 kubelet[3114]: E0717 20:15:13.146705    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [bf6835e7df11c2226357f6c241d1255496d2ec1bbff4467041ba7a213fc6a1df] <==
	* I0717 20:05:28.805642       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 20:05:28.829060       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 20:05:28.829157       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 20:05:28.842003       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 20:05:28.842302       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-149000_3b7660e9-00ce-412a-8d74-43e33a1fc1be!
	I0717 20:05:28.844036       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2435fbc8-fa69-40c3-bcfe-3d130ef0c83f", APIVersion:"v1", ResourceVersion:"436", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-149000_3b7660e9-00ce-412a-8d74-43e33a1fc1be became leader
	I0717 20:05:28.943370       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-149000_3b7660e9-00ce-412a-8d74-43e33a1fc1be!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-149000 -n old-k8s-version-149000
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-149000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-cxzws
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-149000 describe pod metrics-server-74d5856cc6-cxzws
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-149000 describe pod metrics-server-74d5856cc6-cxzws: exit status 1 (84.264009ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-cxzws" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-149000 describe pod metrics-server-74d5856cc6-cxzws: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.77s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.37s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0717 20:09:24.379361 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/functional-685960/client.crt: no such file or directory
E0717 20:11:00.133445 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt: no such file or directory
E0717 20:11:03.520278 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-114855 -n embed-certs-114855
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-07-17 20:17:58.634551232 +0000 UTC m=+5679.943246790
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-114855 -n embed-certs-114855
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-114855 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-114855 logs -n 25: (1.270434442s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p default-k8s-diff-port-711413  | default-k8s-diff-port-711413 | jenkins | v1.30.1 | 17 Jul 23 19:51 UTC | 17 Jul 23 19:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-711413 | jenkins | v1.30.1 | 17 Jul 23 19:51 UTC |                     |
	|         | default-k8s-diff-port-711413                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-891260             | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:51 UTC | 17 Jul 23 19:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-891260                                   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:51 UTC | 17 Jul 23 19:51 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-891260                  | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:51 UTC | 17 Jul 23 19:51 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-891260 --memory=2200 --alsologtostderr   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:51 UTC | 17 Jul 23 19:52 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-891260 sudo                              | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-891260                                   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-891260                                   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-891260                                   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	| delete  | -p newest-cni-891260                                   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	| delete  | -p                                                     | disable-driver-mounts-178387 | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	|         | disable-driver-mounts-178387                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-114855                                  | embed-certs-114855           | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-149000             | old-k8s-version-149000       | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-149000                              | old-k8s-version-149000       | jenkins | v1.30.1 | 17 Jul 23 19:53 UTC | 17 Jul 23 20:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-408472                  | no-preload-408472            | jenkins | v1.30.1 | 17 Jul 23 19:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-408472                                   | no-preload-408472            | jenkins | v1.30.1 | 17 Jul 23 19:53 UTC | 17 Jul 23 20:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-711413       | default-k8s-diff-port-711413 | jenkins | v1.30.1 | 17 Jul 23 19:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-711413 | jenkins | v1.30.1 | 17 Jul 23 19:54 UTC | 17 Jul 23 20:03 UTC |
	|         | default-k8s-diff-port-711413                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-114855            | embed-certs-114855           | jenkins | v1.30.1 | 17 Jul 23 19:54 UTC | 17 Jul 23 19:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-114855                                  | embed-certs-114855           | jenkins | v1.30.1 | 17 Jul 23 19:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-114855                 | embed-certs-114855           | jenkins | v1.30.1 | 17 Jul 23 19:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-114855                                  | embed-certs-114855           | jenkins | v1.30.1 | 17 Jul 23 19:57 UTC | 17 Jul 23 20:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-149000                              | old-k8s-version-149000       | jenkins | v1.30.1 | 17 Jul 23 20:17 UTC | 17 Jul 23 20:17 UTC |
	| start   | -p auto-395471 --memory=3072                           | auto-395471                  | jenkins | v1.30.1 | 17 Jul 23 20:17 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 20:17:54
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 20:17:54.296860 1107949 out.go:296] Setting OutFile to fd 1 ...
	I0717 20:17:54.296986 1107949 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 20:17:54.296995 1107949 out.go:309] Setting ErrFile to fd 2...
	I0717 20:17:54.296999 1107949 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 20:17:54.297216 1107949 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1061725/.minikube/bin
	I0717 20:17:54.297921 1107949 out.go:303] Setting JSON to false
	I0717 20:17:54.299517 1107949 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":18025,"bootTime":1689607049,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 20:17:54.299703 1107949 start.go:138] virtualization: kvm guest
	I0717 20:17:54.304175 1107949 out.go:177] * [auto-395471] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 20:17:54.306710 1107949 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 20:17:54.306725 1107949 notify.go:220] Checking for updates...
	I0717 20:17:54.308644 1107949 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 20:17:54.310607 1107949 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 20:17:54.312364 1107949 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1061725/.minikube
	I0717 20:17:54.314324 1107949 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 20:17:54.316236 1107949 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 20:17:54.318624 1107949 config.go:182] Loaded profile config "default-k8s-diff-port-711413": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 20:17:54.318745 1107949 config.go:182] Loaded profile config "embed-certs-114855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 20:17:54.318846 1107949 config.go:182] Loaded profile config "no-preload-408472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 20:17:54.318994 1107949 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 20:17:54.360093 1107949 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 20:17:54.361771 1107949 start.go:298] selected driver: kvm2
	I0717 20:17:54.361798 1107949 start.go:880] validating driver "kvm2" against <nil>
	I0717 20:17:54.361815 1107949 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 20:17:54.362721 1107949 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 20:17:54.362810 1107949 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16890-1061725/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 20:17:54.379312 1107949 install.go:137] /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2 version is 1.30.1
	I0717 20:17:54.379385 1107949 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 20:17:54.379744 1107949 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 20:17:54.379802 1107949 cni.go:84] Creating CNI manager for ""
	I0717 20:17:54.379822 1107949 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 20:17:54.379831 1107949 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 20:17:54.379847 1107949 start_flags.go:319] config:
	{Name:auto-395471 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:auto-395471 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni Feat
ureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 20:17:54.380009 1107949 iso.go:125] acquiring lock: {Name:mk4c08fdde891ec9d41b144ca36ec57b2da32175 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 20:17:54.382643 1107949 out.go:177] * Starting control plane node auto-395471 in cluster auto-395471
	I0717 20:17:54.384524 1107949 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 20:17:54.384618 1107949 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0717 20:17:54.384632 1107949 cache.go:57] Caching tarball of preloaded images
	I0717 20:17:54.384778 1107949 preload.go:174] Found /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 20:17:54.384791 1107949 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 20:17:54.384950 1107949 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/auto-395471/config.json ...
	I0717 20:17:54.384981 1107949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/auto-395471/config.json: {Name:mk7856fb3b600bdb285f0e435a53908c958f1add Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:17:54.385171 1107949 start.go:365] acquiring machines lock for auto-395471: {Name:mk1fbc5d19c9a63796b02883911e39f1c406ff89 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 20:17:54.385206 1107949 start.go:369] acquired machines lock for "auto-395471" in 20.36µs
	I0717 20:17:54.385228 1107949 start.go:93] Provisioning new machine with config: &{Name:auto-395471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:auto-395471 Name
space:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binary
Mirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 20:17:54.385294 1107949 start.go:125] createHost starting for "" (driver="kvm2")
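For reference, the ClusterConfig dumped above corresponds to a start invocation of roughly the following shape. This is a sketch reconstructed from the struct fields shown (Memory:3072, CPUs:2, Driver:kvm2, ContainerRuntime:crio, KubernetesVersion:v1.27.3), not the exact command line the test harness ran:

    out/minikube-linux-amd64 start -p auto-395471 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.27.3 --memory=3072 --cpus=2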
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-07-17 19:58:50 UTC, ends at Mon 2023-07-17 20:17:59 UTC. --
	Jul 17 20:17:59 embed-certs-114855 crio[715]: time="2023-07-17 20:17:59.215132338Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:1f0c0b79b31dfe68fbef1f2e595035de7d6f678ba3f6166057bde811a03d123b,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:994ec0db-08aa-4dd5-a137-1f6984051e65,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689624283948793677,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 994ec0db-08aa-4dd5-a137-1f6984051e65,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube
-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-07-17T20:04:43.301910477Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a8b693e844590cf8e81069ce717d47fad82fa1f98dbcf2db6a505aa96d011933,Metadata:&PodSandboxMetadata{Name:metrics-server-74d5c6b9c-jvfz8,Uid:f861e320-9125-4081-b043-c90d8b027f71,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689624283503057283,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-74d5c6b9c-jvfz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f861e320-9125-4081-b043-c90d8b027f71,
k8s-app: metrics-server,pod-template-hash: 74d5c6b9c,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T20:04:43.158502751Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9ee7541fb51d98c7de4c5e3005a44c354168d4ea64d4b82a1bfe852a9204f7b9,Metadata:&PodSandboxMetadata{Name:coredns-5d78c9869d-gq2b2,Uid:833e67fa-16e2-4a5c-8c39-16cc4fbd411e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689624281169070004,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5d78c9869d-gq2b2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 833e67fa-16e2-4a5c-8c39-16cc4fbd411e,k8s-app: kube-dns,pod-template-hash: 5d78c9869d,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T20:04:40.801980588Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c274ffc0c7fe9f52c77da4556ab4878886fd82e4b6e445a226c7995dcdf174a9,Metadata:&PodSandboxMetadata{Name:kube-proxy-bfvnl,Uid:6f7fb55d-fa9f-4d08-b4ab-3814a
f550c01,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689624281098764435,Labels:map[string]string{controller-revision-hash: 56999f657b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-bfvnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f7fb55d-fa9f-4d08-b4ab-3814af550c01,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T20:04:40.757679348Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:76dee47a73691a4fd9d500e9e7e7fd151efc0aa9b6aeb0dd463f77986074ee12,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-114855,Uid:6e7dce0dd54044c5bead23f2309aa88d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689624257820120605,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7dce0dd54044c5bead23f2309a
a88d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.213:8443,kubernetes.io/config.hash: 6e7dce0dd54044c5bead23f2309aa88d,kubernetes.io/config.seen: 2023-07-17T20:04:17.267807063Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4b187605bba1119a66742eb26f6c6c069e2ef67327cae9799de79fe23d7b7efe,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-114855,Uid:57c1c5fe39a9ad0e8adcb474b4dff169,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689624257812237247,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57c1c5fe39a9ad0e8adcb474b4dff169,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 57c1c5fe39a9ad0e8adcb474b4dff169,kubernetes.io/config.seen: 2023-07-17T20:04:17.267800332Z,kube
rnetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bc879944812a23892188d86cddd6c4fd833d4ae559d9adb2a227c64a0e660c02,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-114855,Uid:3c2e3fe9483a42bbcf2012a6138b250f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689624257792495948,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2e3fe9483a42bbcf2012a6138b250f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.213:2379,kubernetes.io/config.hash: 3c2e3fe9483a42bbcf2012a6138b250f,kubernetes.io/config.seen: 2023-07-17T20:04:17.267805900Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9ecfd0a904e7ebaab5c4dfb2cde59de457bca69f6a8f61b737dc42bde218ec51,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-114855,Uid:849a8d0dccd58b0d4de1642f30453709,Namespac
e:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689624257766141422,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 849a8d0dccd58b0d4de1642f30453709,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 849a8d0dccd58b0d4de1642f30453709,kubernetes.io/config.seen: 2023-07-17T20:04:17.267804893Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=48b282db-cb93-466e-a9fe-24adfe79dcba name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 20:17:59 embed-certs-114855 crio[715]: time="2023-07-17 20:17:59.215904221Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b8e95e5c-e96e-47c6-8eb4-2f2a01abcce3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 20:17:59 embed-certs-114855 crio[715]: time="2023-07-17 20:17:59.216023841Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b8e95e5c-e96e-47c6-8eb4-2f2a01abcce3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 20:17:59 embed-certs-114855 crio[715]: time="2023-07-17 20:17:59.216306772Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea,PodSandboxId:1f0c0b79b31dfe68fbef1f2e595035de7d6f678ba3f6166057bde811a03d123b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689624284905264525,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 994ec0db-08aa-4dd5-a137-1f6984051e65,},Annotations:map[string]string{io.kubernetes.container.hash: bd992592,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39,PodSandboxId:c274ffc0c7fe9f52c77da4556ab4878886fd82e4b6e445a226c7995dcdf174a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689624284651614217,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bfvnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f7fb55d-fa9f-4d08-b4ab-3814af550c01,},Annotations:map[string]string{io.kubernetes.container.hash: 90992ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e,PodSandboxId:9ee7541fb51d98c7de4c5e3005a44c354168d4ea64d4b82a1bfe852a9204f7b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689624283530546443,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-gq2b2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 833e67fa-16e2-4a5c-8c39-16cc4fbd411e,},Annotations:map[string]string{io.kubernetes.container.hash: 59fee763,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52,PodSandboxId:9ecfd0a904e7ebaab5c4dfb2cde59de457bca69f6a8f61b737dc42bde218ec51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689624258869292359,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 849a8d0dccd58b0d4de1642f30453709,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f,PodSandboxId:76dee47a73691a4fd9d500e9e7e7fd151efc0aa9b6aeb0dd463f77986074ee12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689624258812734812,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7dce0d
d54044c5bead23f2309aa88d,},Annotations:map[string]string{io.kubernetes.container.hash: 933764ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd,PodSandboxId:4b187605bba1119a66742eb26f6c6c069e2ef67327cae9799de79fe23d7b7efe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689624258522563148,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 57c1c5fe39a9ad0e8adcb474b4dff169,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8,PodSandboxId:bc879944812a23892188d86cddd6c4fd833d4ae559d9adb2a227c64a0e660c02,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689624258349028545,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2e3fe9483a42bbcf2012a6138b250f
,},Annotations:map[string]string{io.kubernetes.container.hash: c61456bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b8e95e5c-e96e-47c6-8eb4-2f2a01abcce3 name=/runtime.v1.RuntimeService/ListContainers
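The ListPodSandbox and ListContainers debug entries in this journal are CRI-O answering standard CRI RuntimeService calls from its client while the node is being polled. The same listings can be requested by hand from inside the guest; a sketch, assuming crictl is available in the minikube VM and CRI-O is listening on its default socket:

    out/minikube-linux-amd64 -p embed-certs-114855 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock pods"
    out/minikube-linux-amd64 -p embed-certs-114855 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a"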
	Jul 17 20:17:59 embed-certs-114855 crio[715]: time="2023-07-17 20:17:59.245302793Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7ecba697-e302-466b-a4e9-7fad24f4d595 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:59 embed-certs-114855 crio[715]: time="2023-07-17 20:17:59.245428900Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7ecba697-e302-466b-a4e9-7fad24f4d595 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:59 embed-certs-114855 crio[715]: time="2023-07-17 20:17:59.245661700Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea,PodSandboxId:1f0c0b79b31dfe68fbef1f2e595035de7d6f678ba3f6166057bde811a03d123b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689624284905264525,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 994ec0db-08aa-4dd5-a137-1f6984051e65,},Annotations:map[string]string{io.kubernetes.container.hash: bd992592,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39,PodSandboxId:c274ffc0c7fe9f52c77da4556ab4878886fd82e4b6e445a226c7995dcdf174a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689624284651614217,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bfvnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f7fb55d-fa9f-4d08-b4ab-3814af550c01,},Annotations:map[string]string{io.kubernetes.container.hash: 90992ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e,PodSandboxId:9ee7541fb51d98c7de4c5e3005a44c354168d4ea64d4b82a1bfe852a9204f7b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689624283530546443,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-gq2b2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 833e67fa-16e2-4a5c-8c39-16cc4fbd411e,},Annotations:map[string]string{io.kubernetes.container.hash: 59fee763,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52,PodSandboxId:9ecfd0a904e7ebaab5c4dfb2cde59de457bca69f6a8f61b737dc42bde218ec51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689624258869292359,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 849a8d0dccd58b0d4de1642f30453709,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f,PodSandboxId:76dee47a73691a4fd9d500e9e7e7fd151efc0aa9b6aeb0dd463f77986074ee12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689624258812734812,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7dce0d
d54044c5bead23f2309aa88d,},Annotations:map[string]string{io.kubernetes.container.hash: 933764ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd,PodSandboxId:4b187605bba1119a66742eb26f6c6c069e2ef67327cae9799de79fe23d7b7efe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689624258522563148,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 57c1c5fe39a9ad0e8adcb474b4dff169,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8,PodSandboxId:bc879944812a23892188d86cddd6c4fd833d4ae559d9adb2a227c64a0e660c02,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689624258349028545,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2e3fe9483a42bbcf2012a6138b250f
,},Annotations:map[string]string{io.kubernetes.container.hash: c61456bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7ecba697-e302-466b-a4e9-7fad24f4d595 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:59 embed-certs-114855 crio[715]: time="2023-07-17 20:17:59.290901554Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=dd40e986-ed0c-42a4-9d0a-c357e5343436 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:59 embed-certs-114855 crio[715]: time="2023-07-17 20:17:59.291031388Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=dd40e986-ed0c-42a4-9d0a-c357e5343436 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:59 embed-certs-114855 crio[715]: time="2023-07-17 20:17:59.291445114Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea,PodSandboxId:1f0c0b79b31dfe68fbef1f2e595035de7d6f678ba3f6166057bde811a03d123b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689624284905264525,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 994ec0db-08aa-4dd5-a137-1f6984051e65,},Annotations:map[string]string{io.kubernetes.container.hash: bd992592,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39,PodSandboxId:c274ffc0c7fe9f52c77da4556ab4878886fd82e4b6e445a226c7995dcdf174a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689624284651614217,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bfvnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f7fb55d-fa9f-4d08-b4ab-3814af550c01,},Annotations:map[string]string{io.kubernetes.container.hash: 90992ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e,PodSandboxId:9ee7541fb51d98c7de4c5e3005a44c354168d4ea64d4b82a1bfe852a9204f7b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689624283530546443,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-gq2b2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 833e67fa-16e2-4a5c-8c39-16cc4fbd411e,},Annotations:map[string]string{io.kubernetes.container.hash: 59fee763,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52,PodSandboxId:9ecfd0a904e7ebaab5c4dfb2cde59de457bca69f6a8f61b737dc42bde218ec51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689624258869292359,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 849a8d0dccd58b0d4de1642f30453709,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f,PodSandboxId:76dee47a73691a4fd9d500e9e7e7fd151efc0aa9b6aeb0dd463f77986074ee12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689624258812734812,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7dce0d
d54044c5bead23f2309aa88d,},Annotations:map[string]string{io.kubernetes.container.hash: 933764ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd,PodSandboxId:4b187605bba1119a66742eb26f6c6c069e2ef67327cae9799de79fe23d7b7efe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689624258522563148,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 57c1c5fe39a9ad0e8adcb474b4dff169,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8,PodSandboxId:bc879944812a23892188d86cddd6c4fd833d4ae559d9adb2a227c64a0e660c02,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689624258349028545,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2e3fe9483a42bbcf2012a6138b250f
,},Annotations:map[string]string{io.kubernetes.container.hash: c61456bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=dd40e986-ed0c-42a4-9d0a-c357e5343436 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:59 embed-certs-114855 crio[715]: time="2023-07-17 20:17:59.333987325Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5438426c-ae63-420b-8ce7-428f51780b18 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:59 embed-certs-114855 crio[715]: time="2023-07-17 20:17:59.334091631Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5438426c-ae63-420b-8ce7-428f51780b18 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:59 embed-certs-114855 crio[715]: time="2023-07-17 20:17:59.334395770Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea,PodSandboxId:1f0c0b79b31dfe68fbef1f2e595035de7d6f678ba3f6166057bde811a03d123b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689624284905264525,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 994ec0db-08aa-4dd5-a137-1f6984051e65,},Annotations:map[string]string{io.kubernetes.container.hash: bd992592,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39,PodSandboxId:c274ffc0c7fe9f52c77da4556ab4878886fd82e4b6e445a226c7995dcdf174a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689624284651614217,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bfvnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f7fb55d-fa9f-4d08-b4ab-3814af550c01,},Annotations:map[string]string{io.kubernetes.container.hash: 90992ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e,PodSandboxId:9ee7541fb51d98c7de4c5e3005a44c354168d4ea64d4b82a1bfe852a9204f7b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689624283530546443,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-gq2b2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 833e67fa-16e2-4a5c-8c39-16cc4fbd411e,},Annotations:map[string]string{io.kubernetes.container.hash: 59fee763,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52,PodSandboxId:9ecfd0a904e7ebaab5c4dfb2cde59de457bca69f6a8f61b737dc42bde218ec51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689624258869292359,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 849a8d0dccd58b0d4de1642f30453709,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f,PodSandboxId:76dee47a73691a4fd9d500e9e7e7fd151efc0aa9b6aeb0dd463f77986074ee12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689624258812734812,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7dce0d
d54044c5bead23f2309aa88d,},Annotations:map[string]string{io.kubernetes.container.hash: 933764ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd,PodSandboxId:4b187605bba1119a66742eb26f6c6c069e2ef67327cae9799de79fe23d7b7efe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689624258522563148,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 57c1c5fe39a9ad0e8adcb474b4dff169,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8,PodSandboxId:bc879944812a23892188d86cddd6c4fd833d4ae559d9adb2a227c64a0e660c02,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689624258349028545,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2e3fe9483a42bbcf2012a6138b250f
,},Annotations:map[string]string{io.kubernetes.container.hash: c61456bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5438426c-ae63-420b-8ce7-428f51780b18 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:59 embed-certs-114855 crio[715]: time="2023-07-17 20:17:59.373676140Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ed75e826-e7ed-4972-b474-9633efdcede3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:59 embed-certs-114855 crio[715]: time="2023-07-17 20:17:59.373770098Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ed75e826-e7ed-4972-b474-9633efdcede3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:59 embed-certs-114855 crio[715]: time="2023-07-17 20:17:59.373963105Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea,PodSandboxId:1f0c0b79b31dfe68fbef1f2e595035de7d6f678ba3f6166057bde811a03d123b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689624284905264525,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 994ec0db-08aa-4dd5-a137-1f6984051e65,},Annotations:map[string]string{io.kubernetes.container.hash: bd992592,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39,PodSandboxId:c274ffc0c7fe9f52c77da4556ab4878886fd82e4b6e445a226c7995dcdf174a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689624284651614217,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bfvnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f7fb55d-fa9f-4d08-b4ab-3814af550c01,},Annotations:map[string]string{io.kubernetes.container.hash: 90992ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e,PodSandboxId:9ee7541fb51d98c7de4c5e3005a44c354168d4ea64d4b82a1bfe852a9204f7b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689624283530546443,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-gq2b2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 833e67fa-16e2-4a5c-8c39-16cc4fbd411e,},Annotations:map[string]string{io.kubernetes.container.hash: 59fee763,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52,PodSandboxId:9ecfd0a904e7ebaab5c4dfb2cde59de457bca69f6a8f61b737dc42bde218ec51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689624258869292359,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 849a8d0dccd58b0d4de1642f30453709,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f,PodSandboxId:76dee47a73691a4fd9d500e9e7e7fd151efc0aa9b6aeb0dd463f77986074ee12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689624258812734812,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7dce0d
d54044c5bead23f2309aa88d,},Annotations:map[string]string{io.kubernetes.container.hash: 933764ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd,PodSandboxId:4b187605bba1119a66742eb26f6c6c069e2ef67327cae9799de79fe23d7b7efe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689624258522563148,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 57c1c5fe39a9ad0e8adcb474b4dff169,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8,PodSandboxId:bc879944812a23892188d86cddd6c4fd833d4ae559d9adb2a227c64a0e660c02,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689624258349028545,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2e3fe9483a42bbcf2012a6138b250f
,},Annotations:map[string]string{io.kubernetes.container.hash: c61456bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ed75e826-e7ed-4972-b474-9633efdcede3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:59 embed-certs-114855 crio[715]: time="2023-07-17 20:17:59.411428348Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9e384043-3d3b-4c7d-a3af-850cfda38799 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:59 embed-certs-114855 crio[715]: time="2023-07-17 20:17:59.411578760Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9e384043-3d3b-4c7d-a3af-850cfda38799 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:59 embed-certs-114855 crio[715]: time="2023-07-17 20:17:59.412042127Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea,PodSandboxId:1f0c0b79b31dfe68fbef1f2e595035de7d6f678ba3f6166057bde811a03d123b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689624284905264525,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 994ec0db-08aa-4dd5-a137-1f6984051e65,},Annotations:map[string]string{io.kubernetes.container.hash: bd992592,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39,PodSandboxId:c274ffc0c7fe9f52c77da4556ab4878886fd82e4b6e445a226c7995dcdf174a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689624284651614217,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bfvnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f7fb55d-fa9f-4d08-b4ab-3814af550c01,},Annotations:map[string]string{io.kubernetes.container.hash: 90992ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e,PodSandboxId:9ee7541fb51d98c7de4c5e3005a44c354168d4ea64d4b82a1bfe852a9204f7b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689624283530546443,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-gq2b2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 833e67fa-16e2-4a5c-8c39-16cc4fbd411e,},Annotations:map[string]string{io.kubernetes.container.hash: 59fee763,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52,PodSandboxId:9ecfd0a904e7ebaab5c4dfb2cde59de457bca69f6a8f61b737dc42bde218ec51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689624258869292359,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 849a8d0dccd58b0d4de1642f30453709,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f,PodSandboxId:76dee47a73691a4fd9d500e9e7e7fd151efc0aa9b6aeb0dd463f77986074ee12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689624258812734812,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7dce0d
d54044c5bead23f2309aa88d,},Annotations:map[string]string{io.kubernetes.container.hash: 933764ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd,PodSandboxId:4b187605bba1119a66742eb26f6c6c069e2ef67327cae9799de79fe23d7b7efe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689624258522563148,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 57c1c5fe39a9ad0e8adcb474b4dff169,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8,PodSandboxId:bc879944812a23892188d86cddd6c4fd833d4ae559d9adb2a227c64a0e660c02,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689624258349028545,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2e3fe9483a42bbcf2012a6138b250f
,},Annotations:map[string]string{io.kubernetes.container.hash: c61456bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9e384043-3d3b-4c7d-a3af-850cfda38799 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:59 embed-certs-114855 crio[715]: time="2023-07-17 20:17:59.452506920Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e8afbbff-1d7c-4345-a849-a912a1a8e2a0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:59 embed-certs-114855 crio[715]: time="2023-07-17 20:17:59.452640583Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e8afbbff-1d7c-4345-a849-a912a1a8e2a0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:59 embed-certs-114855 crio[715]: time="2023-07-17 20:17:59.452917927Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea,PodSandboxId:1f0c0b79b31dfe68fbef1f2e595035de7d6f678ba3f6166057bde811a03d123b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689624284905264525,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 994ec0db-08aa-4dd5-a137-1f6984051e65,},Annotations:map[string]string{io.kubernetes.container.hash: bd992592,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39,PodSandboxId:c274ffc0c7fe9f52c77da4556ab4878886fd82e4b6e445a226c7995dcdf174a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689624284651614217,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bfvnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f7fb55d-fa9f-4d08-b4ab-3814af550c01,},Annotations:map[string]string{io.kubernetes.container.hash: 90992ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e,PodSandboxId:9ee7541fb51d98c7de4c5e3005a44c354168d4ea64d4b82a1bfe852a9204f7b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689624283530546443,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-gq2b2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 833e67fa-16e2-4a5c-8c39-16cc4fbd411e,},Annotations:map[string]string{io.kubernetes.container.hash: 59fee763,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52,PodSandboxId:9ecfd0a904e7ebaab5c4dfb2cde59de457bca69f6a8f61b737dc42bde218ec51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689624258869292359,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 849a8d0dccd58b0d4de1642f30453709,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f,PodSandboxId:76dee47a73691a4fd9d500e9e7e7fd151efc0aa9b6aeb0dd463f77986074ee12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689624258812734812,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7dce0d
d54044c5bead23f2309aa88d,},Annotations:map[string]string{io.kubernetes.container.hash: 933764ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd,PodSandboxId:4b187605bba1119a66742eb26f6c6c069e2ef67327cae9799de79fe23d7b7efe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689624258522563148,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 57c1c5fe39a9ad0e8adcb474b4dff169,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8,PodSandboxId:bc879944812a23892188d86cddd6c4fd833d4ae559d9adb2a227c64a0e660c02,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689624258349028545,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2e3fe9483a42bbcf2012a6138b250f
,},Annotations:map[string]string{io.kubernetes.container.hash: c61456bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e8afbbff-1d7c-4345-a849-a912a1a8e2a0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:59 embed-certs-114855 crio[715]: time="2023-07-17 20:17:59.491289583Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6a794920-16fe-4c87-b208-f80c418aae24 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:59 embed-certs-114855 crio[715]: time="2023-07-17 20:17:59.491387906Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6a794920-16fe-4c87-b208-f80c418aae24 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:59 embed-certs-114855 crio[715]: time="2023-07-17 20:17:59.491589800Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea,PodSandboxId:1f0c0b79b31dfe68fbef1f2e595035de7d6f678ba3f6166057bde811a03d123b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689624284905264525,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 994ec0db-08aa-4dd5-a137-1f6984051e65,},Annotations:map[string]string{io.kubernetes.container.hash: bd992592,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39,PodSandboxId:c274ffc0c7fe9f52c77da4556ab4878886fd82e4b6e445a226c7995dcdf174a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689624284651614217,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bfvnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f7fb55d-fa9f-4d08-b4ab-3814af550c01,},Annotations:map[string]string{io.kubernetes.container.hash: 90992ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e,PodSandboxId:9ee7541fb51d98c7de4c5e3005a44c354168d4ea64d4b82a1bfe852a9204f7b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689624283530546443,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-gq2b2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 833e67fa-16e2-4a5c-8c39-16cc4fbd411e,},Annotations:map[string]string{io.kubernetes.container.hash: 59fee763,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52,PodSandboxId:9ecfd0a904e7ebaab5c4dfb2cde59de457bca69f6a8f61b737dc42bde218ec51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689624258869292359,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 849a8d0dccd58b0d4de1642f30453709,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f,PodSandboxId:76dee47a73691a4fd9d500e9e7e7fd151efc0aa9b6aeb0dd463f77986074ee12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689624258812734812,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7dce0d
d54044c5bead23f2309aa88d,},Annotations:map[string]string{io.kubernetes.container.hash: 933764ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd,PodSandboxId:4b187605bba1119a66742eb26f6c6c069e2ef67327cae9799de79fe23d7b7efe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689624258522563148,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 57c1c5fe39a9ad0e8adcb474b4dff169,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8,PodSandboxId:bc879944812a23892188d86cddd6c4fd833d4ae559d9adb2a227c64a0e660c02,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689624258349028545,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2e3fe9483a42bbcf2012a6138b250f
,},Annotations:map[string]string{io.kubernetes.container.hash: c61456bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6a794920-16fe-4c87-b208-f80c418aae24 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	1f09aa9710f96       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   1f0c0b79b31df
	c3094a9649f15       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c   13 minutes ago      Running             kube-proxy                0                   c274ffc0c7fe9
	9edc839c4e8e9       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   13 minutes ago      Running             coredns                   0                   9ee7541fb51d9
	20ad6b7297313       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a   13 minutes ago      Running             kube-scheduler            2                   9ecfd0a904e7e
	b983a08dbeafc       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a   13 minutes ago      Running             kube-apiserver            2                   76dee47a73691
	7a8fd7290abfe       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f   13 minutes ago      Running             kube-controller-manager   2                   4b187605bba11
	6f2263eee0373       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681   13 minutes ago      Running             etcd                      2                   bc879944812a2
	
	* 
	* ==> coredns [9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:35475 - 12124 "HINFO IN 5559246197945730497.3557320093662157327. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011324204s
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-114855
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-114855
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5
	                    minikube.k8s.io/name=embed-certs-114855
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T20_04_27_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 20:04:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-114855
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 20:17:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 20:14:58 +0000   Mon, 17 Jul 2023 20:04:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 20:14:58 +0000   Mon, 17 Jul 2023 20:04:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 20:14:58 +0000   Mon, 17 Jul 2023 20:04:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 20:14:58 +0000   Mon, 17 Jul 2023 20:04:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.213
	  Hostname:    embed-certs-114855
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 467d878487bd48a9aeb3f4254d204a95
	  System UUID:                467d8784-87bd-48a9-aeb3-f4254d204a95
	  Boot ID:                    c8d572fc-29b3-45e1-abc8-5f78d915cd39
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-gq2b2                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-embed-certs-114855                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kube-apiserver-embed-certs-114855             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-embed-certs-114855    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-bfvnl                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-embed-certs-114855             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-74d5c6b9c-jvfz8                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-114855 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-114855 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-114855 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node embed-certs-114855 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node embed-certs-114855 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node embed-certs-114855 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             13m                kubelet          Node embed-certs-114855 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                13m                kubelet          Node embed-certs-114855 status is now: NodeReady
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-114855 event: Registered Node embed-certs-114855 in Controller
	
	* 
	* ==> dmesg <==
	* [Jul17 19:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.076625] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.548955] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.742032] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.171411] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.610597] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul17 19:59] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.182963] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.243175] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.158852] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.269689] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[ +18.991145] systemd-fstab-generator[917]: Ignoring "noauto" for root device
	[ +19.344202] kauditd_printk_skb: 29 callbacks suppressed
	[Jul17 20:04] systemd-fstab-generator[3562]: Ignoring "noauto" for root device
	[ +10.338751] systemd-fstab-generator[3890]: Ignoring "noauto" for root device
	[ +22.668487] kauditd_printk_skb: 7 callbacks suppressed
	
	* 
	* ==> etcd [6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8] <==
	* {"level":"info","ts":"2023-07-17T20:04:20.755Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"3f2bab300812805f","initial-advertise-peer-urls":["https://192.168.39.213:2380"],"listen-peer-urls":["https://192.168.39.213:2380"],"advertise-client-urls":["https://192.168.39.213:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.213:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-17T20:04:20.755Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-17T20:04:20.755Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.39.213:2380"}
	{"level":"info","ts":"2023-07-17T20:04:20.755Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.39.213:2380"}
	{"level":"info","ts":"2023-07-17T20:04:20.987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3f2bab300812805f is starting a new election at term 1"}
	{"level":"info","ts":"2023-07-17T20:04:20.987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3f2bab300812805f became pre-candidate at term 1"}
	{"level":"info","ts":"2023-07-17T20:04:20.987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3f2bab300812805f received MsgPreVoteResp from 3f2bab300812805f at term 1"}
	{"level":"info","ts":"2023-07-17T20:04:20.987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3f2bab300812805f became candidate at term 2"}
	{"level":"info","ts":"2023-07-17T20:04:20.987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3f2bab300812805f received MsgVoteResp from 3f2bab300812805f at term 2"}
	{"level":"info","ts":"2023-07-17T20:04:20.987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3f2bab300812805f became leader at term 2"}
	{"level":"info","ts":"2023-07-17T20:04:20.987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3f2bab300812805f elected leader 3f2bab300812805f at term 2"}
	{"level":"info","ts":"2023-07-17T20:04:20.991Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T20:04:20.996Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"3f2bab300812805f","local-member-attributes":"{Name:embed-certs-114855 ClientURLs:[https://192.168.39.213:2379]}","request-path":"/0/members/3f2bab300812805f/attributes","cluster-id":"d6b13109b9f74b4a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-17T20:04:20.996Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T20:04:20.999Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T20:04:21.001Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.39.213:2379"}
	{"level":"info","ts":"2023-07-17T20:04:21.001Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-17T20:04:21.001Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-17T20:04:21.008Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d6b13109b9f74b4a","local-member-id":"3f2bab300812805f","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T20:04:21.008Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T20:04:21.009Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T20:04:21.024Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-17T20:14:21.395Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":722}
	{"level":"info","ts":"2023-07-17T20:14:21.399Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":722,"took":"3.752756ms","hash":4247553167}
	{"level":"info","ts":"2023-07-17T20:14:21.399Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4247553167,"revision":722,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  20:17:59 up 19 min,  0 users,  load average: 0.09, 0.21, 0.24
	Linux embed-certs-114855 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f] <==
	* E0717 20:14:24.390860       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 20:14:24.390919       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0717 20:14:24.390747       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 20:14:24.392256       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 20:15:23.282727       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.108.97.242:443: connect: connection refused
	I0717 20:15:23.282845       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0717 20:15:24.391921       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 20:15:24.392071       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 20:15:24.392138       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 20:15:24.392449       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 20:15:24.392573       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 20:15:24.393248       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 20:16:23.281951       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.108.97.242:443: connect: connection refused
	I0717 20:16:23.282369       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0717 20:17:23.281717       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.108.97.242:443: connect: connection refused
	I0717 20:17:23.281736       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0717 20:17:24.392921       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 20:17:24.393033       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 20:17:24.393066       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 20:17:24.394356       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 20:17:24.394462       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 20:17:24.394489       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd] <==
	* W0717 20:11:40.257069       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:12:09.751826       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:12:10.268306       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:12:39.758356       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:12:40.277970       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:13:09.765422       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:13:10.290951       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:13:39.772053       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:13:40.301044       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:14:09.779310       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:14:10.310619       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:14:39.786756       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:14:40.320609       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:15:09.793070       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:15:10.332303       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:15:39.799948       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:15:40.341055       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:16:09.808071       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:16:10.351838       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:16:39.815675       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:16:40.363068       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:17:09.821705       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:17:10.372424       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:17:39.829294       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:17:40.381755       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	
	* 
	* ==> kube-proxy [c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39] <==
	* I0717 20:04:45.185454       1 node.go:141] Successfully retrieved node IP: 192.168.39.213
	I0717 20:04:45.185652       1 server_others.go:110] "Detected node IP" address="192.168.39.213"
	I0717 20:04:45.185725       1 server_others.go:554] "Using iptables proxy"
	I0717 20:04:45.247298       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0717 20:04:45.247359       1 server_others.go:192] "Using iptables Proxier"
	I0717 20:04:45.248158       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 20:04:45.249489       1 server.go:658] "Version info" version="v1.27.3"
	I0717 20:04:45.249671       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 20:04:45.252764       1 config.go:188] "Starting service config controller"
	I0717 20:04:45.253755       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 20:04:45.253971       1 config.go:315] "Starting node config controller"
	I0717 20:04:45.253982       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 20:04:45.254580       1 config.go:97] "Starting endpoint slice config controller"
	I0717 20:04:45.254636       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 20:04:45.357432       1 shared_informer.go:318] Caches are synced for node config
	I0717 20:04:45.357491       1 shared_informer.go:318] Caches are synced for service config
	I0717 20:04:45.357584       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52] <==
	* W0717 20:04:24.271567       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 20:04:24.271689       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 20:04:24.276112       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 20:04:24.276365       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 20:04:24.302713       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 20:04:24.302804       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 20:04:24.326540       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 20:04:24.326661       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 20:04:24.344213       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 20:04:24.344267       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 20:04:24.476620       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 20:04:24.476674       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 20:04:24.507367       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 20:04:24.507422       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 20:04:24.526763       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 20:04:24.526933       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 20:04:24.569899       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 20:04:24.570022       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 20:04:24.755359       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 20:04:24.755453       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 20:04:24.779120       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 20:04:24.779316       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 20:04:24.859302       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 20:04:24.859447       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0717 20:04:26.511621       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-07-17 19:58:50 UTC, ends at Mon 2023-07-17 20:18:00 UTC. --
	Jul 17 20:15:27 embed-certs-114855 kubelet[3897]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 20:15:27 embed-certs-114855 kubelet[3897]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 20:15:27 embed-certs-114855 kubelet[3897]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 20:15:34 embed-certs-114855 kubelet[3897]: E0717 20:15:34.155943    3897 remote_image.go:167] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 17 20:15:34 embed-certs-114855 kubelet[3897]: E0717 20:15:34.156081    3897 kuberuntime_image.go:53] "Failed to pull image" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 17 20:15:34 embed-certs-114855 kubelet[3897]: E0717 20:15:34.156314    3897 kuberuntime_manager.go:1212] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-hxz6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pr
obeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:F
ile,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod metrics-server-74d5c6b9c-jvfz8_kube-system(f861e320-9125-4081-b043-c90d8b027f71): ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 17 20:15:34 embed-certs-114855 kubelet[3897]: E0717 20:15:34.156361    3897 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-74d5c6b9c-jvfz8" podUID=f861e320-9125-4081-b043-c90d8b027f71
	Jul 17 20:15:48 embed-certs-114855 kubelet[3897]: E0717 20:15:48.131579    3897 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-jvfz8" podUID=f861e320-9125-4081-b043-c90d8b027f71
	Jul 17 20:16:00 embed-certs-114855 kubelet[3897]: E0717 20:16:00.132639    3897 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-jvfz8" podUID=f861e320-9125-4081-b043-c90d8b027f71
	Jul 17 20:16:13 embed-certs-114855 kubelet[3897]: E0717 20:16:13.131591    3897 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-jvfz8" podUID=f861e320-9125-4081-b043-c90d8b027f71
	Jul 17 20:16:27 embed-certs-114855 kubelet[3897]: E0717 20:16:27.260580    3897 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 17 20:16:27 embed-certs-114855 kubelet[3897]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 20:16:27 embed-certs-114855 kubelet[3897]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 20:16:27 embed-certs-114855 kubelet[3897]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 20:16:28 embed-certs-114855 kubelet[3897]: E0717 20:16:28.130656    3897 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-jvfz8" podUID=f861e320-9125-4081-b043-c90d8b027f71
	Jul 17 20:16:43 embed-certs-114855 kubelet[3897]: E0717 20:16:43.133507    3897 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-jvfz8" podUID=f861e320-9125-4081-b043-c90d8b027f71
	Jul 17 20:16:58 embed-certs-114855 kubelet[3897]: E0717 20:16:58.131336    3897 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-jvfz8" podUID=f861e320-9125-4081-b043-c90d8b027f71
	Jul 17 20:17:11 embed-certs-114855 kubelet[3897]: E0717 20:17:11.131598    3897 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-jvfz8" podUID=f861e320-9125-4081-b043-c90d8b027f71
	Jul 17 20:17:22 embed-certs-114855 kubelet[3897]: E0717 20:17:22.131004    3897 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-jvfz8" podUID=f861e320-9125-4081-b043-c90d8b027f71
	Jul 17 20:17:27 embed-certs-114855 kubelet[3897]: E0717 20:17:27.260564    3897 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 17 20:17:27 embed-certs-114855 kubelet[3897]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 20:17:27 embed-certs-114855 kubelet[3897]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 20:17:27 embed-certs-114855 kubelet[3897]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 20:17:37 embed-certs-114855 kubelet[3897]: E0717 20:17:37.131125    3897 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-jvfz8" podUID=f861e320-9125-4081-b043-c90d8b027f71
	Jul 17 20:17:49 embed-certs-114855 kubelet[3897]: E0717 20:17:49.131447    3897 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-jvfz8" podUID=f861e320-9125-4081-b043-c90d8b027f71
	
	* 
	* ==> storage-provisioner [1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea] <==
	* I0717 20:04:45.064087       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 20:04:45.105889       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 20:04:45.106146       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 20:04:45.122633       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 20:04:45.124944       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-114855_e52dd33a-980c-4403-ba62-ffac53a0b460!
	I0717 20:04:45.134847       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b98ed5b9-8007-47ec-b3ec-aa2586e849ab", APIVersion:"v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-114855_e52dd33a-980c-4403-ba62-ffac53a0b460 became leader
	I0717 20:04:45.235341       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-114855_e52dd33a-980c-4403-ba62-ffac53a0b460!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-114855 -n embed-certs-114855
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-114855 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5c6b9c-jvfz8
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-114855 describe pod metrics-server-74d5c6b9c-jvfz8
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-114855 describe pod metrics-server-74d5c6b9c-jvfz8: exit status 1 (89.5342ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5c6b9c-jvfz8" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-114855 describe pod metrics-server-74d5c6b9c-jvfz8: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.37s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (348.7s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-408472 -n no-preload-408472
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-07-17 20:18:15.499147645 +0000 UTC m=+5696.807843198
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-408472 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-408472 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.03µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-408472 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-408472 -n no-preload-408472
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-408472 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-408472 logs -n 25: (1.281009111s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p default-k8s-diff-port-711413  | default-k8s-diff-port-711413 | jenkins | v1.30.1 | 17 Jul 23 19:51 UTC | 17 Jul 23 19:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-711413 | jenkins | v1.30.1 | 17 Jul 23 19:51 UTC |                     |
	|         | default-k8s-diff-port-711413                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-891260             | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:51 UTC | 17 Jul 23 19:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-891260                                   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:51 UTC | 17 Jul 23 19:51 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-891260                  | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:51 UTC | 17 Jul 23 19:51 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-891260 --memory=2200 --alsologtostderr   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:51 UTC | 17 Jul 23 19:52 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-891260 sudo                              | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-891260                                   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-891260                                   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-891260                                   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	| delete  | -p newest-cni-891260                                   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	| delete  | -p                                                     | disable-driver-mounts-178387 | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	|         | disable-driver-mounts-178387                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-114855                                  | embed-certs-114855           | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-149000             | old-k8s-version-149000       | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-149000                              | old-k8s-version-149000       | jenkins | v1.30.1 | 17 Jul 23 19:53 UTC | 17 Jul 23 20:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-408472                  | no-preload-408472            | jenkins | v1.30.1 | 17 Jul 23 19:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-408472                                   | no-preload-408472            | jenkins | v1.30.1 | 17 Jul 23 19:53 UTC | 17 Jul 23 20:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-711413       | default-k8s-diff-port-711413 | jenkins | v1.30.1 | 17 Jul 23 19:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-711413 | jenkins | v1.30.1 | 17 Jul 23 19:54 UTC | 17 Jul 23 20:03 UTC |
	|         | default-k8s-diff-port-711413                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-114855            | embed-certs-114855           | jenkins | v1.30.1 | 17 Jul 23 19:54 UTC | 17 Jul 23 19:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-114855                                  | embed-certs-114855           | jenkins | v1.30.1 | 17 Jul 23 19:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-114855                 | embed-certs-114855           | jenkins | v1.30.1 | 17 Jul 23 19:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-114855                                  | embed-certs-114855           | jenkins | v1.30.1 | 17 Jul 23 19:57 UTC | 17 Jul 23 20:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-149000                              | old-k8s-version-149000       | jenkins | v1.30.1 | 17 Jul 23 20:17 UTC | 17 Jul 23 20:17 UTC |
	| start   | -p auto-395471 --memory=3072                           | auto-395471                  | jenkins | v1.30.1 | 17 Jul 23 20:17 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 20:17:54
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 20:17:54.296860 1107949 out.go:296] Setting OutFile to fd 1 ...
	I0717 20:17:54.296986 1107949 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 20:17:54.296995 1107949 out.go:309] Setting ErrFile to fd 2...
	I0717 20:17:54.296999 1107949 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 20:17:54.297216 1107949 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1061725/.minikube/bin
	I0717 20:17:54.297921 1107949 out.go:303] Setting JSON to false
	I0717 20:17:54.299517 1107949 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":18025,"bootTime":1689607049,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 20:17:54.299703 1107949 start.go:138] virtualization: kvm guest
	I0717 20:17:54.304175 1107949 out.go:177] * [auto-395471] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 20:17:54.306710 1107949 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 20:17:54.306725 1107949 notify.go:220] Checking for updates...
	I0717 20:17:54.308644 1107949 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 20:17:54.310607 1107949 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 20:17:54.312364 1107949 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1061725/.minikube
	I0717 20:17:54.314324 1107949 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 20:17:54.316236 1107949 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 20:17:54.318624 1107949 config.go:182] Loaded profile config "default-k8s-diff-port-711413": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 20:17:54.318745 1107949 config.go:182] Loaded profile config "embed-certs-114855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 20:17:54.318846 1107949 config.go:182] Loaded profile config "no-preload-408472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 20:17:54.318994 1107949 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 20:17:54.360093 1107949 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 20:17:54.361771 1107949 start.go:298] selected driver: kvm2
	I0717 20:17:54.361798 1107949 start.go:880] validating driver "kvm2" against <nil>
	I0717 20:17:54.361815 1107949 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 20:17:54.362721 1107949 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 20:17:54.362810 1107949 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16890-1061725/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 20:17:54.379312 1107949 install.go:137] /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2 version is 1.30.1
	I0717 20:17:54.379385 1107949 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 20:17:54.379744 1107949 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 20:17:54.379802 1107949 cni.go:84] Creating CNI manager for ""
	I0717 20:17:54.379822 1107949 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 20:17:54.379831 1107949 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 20:17:54.379847 1107949 start_flags.go:319] config:
	{Name:auto-395471 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:auto-395471 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni Feat
ureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 20:17:54.380009 1107949 iso.go:125] acquiring lock: {Name:mk4c08fdde891ec9d41b144ca36ec57b2da32175 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 20:17:54.382643 1107949 out.go:177] * Starting control plane node auto-395471 in cluster auto-395471
	I0717 20:17:54.384524 1107949 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 20:17:54.384618 1107949 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0717 20:17:54.384632 1107949 cache.go:57] Caching tarball of preloaded images
	I0717 20:17:54.384778 1107949 preload.go:174] Found /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 20:17:54.384791 1107949 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 20:17:54.384950 1107949 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/auto-395471/config.json ...
	I0717 20:17:54.384981 1107949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/auto-395471/config.json: {Name:mk7856fb3b600bdb285f0e435a53908c958f1add Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:17:54.385171 1107949 start.go:365] acquiring machines lock for auto-395471: {Name:mk1fbc5d19c9a63796b02883911e39f1c406ff89 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 20:17:54.385206 1107949 start.go:369] acquired machines lock for "auto-395471" in 20.36µs
	I0717 20:17:54.385228 1107949 start.go:93] Provisioning new machine with config: &{Name:auto-395471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:auto-395471 Name
space:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binary
Mirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 20:17:54.385294 1107949 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 20:17:54.387516 1107949 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 20:17:54.387731 1107949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:17:54.387783 1107949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:17:54.404869 1107949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46871
	I0717 20:17:54.405359 1107949 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:17:54.406140 1107949 main.go:141] libmachine: Using API Version  1
	I0717 20:17:54.406194 1107949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:17:54.406650 1107949 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:17:54.406875 1107949 main.go:141] libmachine: (auto-395471) Calling .GetMachineName
	I0717 20:17:54.407041 1107949 main.go:141] libmachine: (auto-395471) Calling .DriverName
	I0717 20:17:54.407245 1107949 start.go:159] libmachine.API.Create for "auto-395471" (driver="kvm2")
	I0717 20:17:54.407289 1107949 client.go:168] LocalClient.Create starting
	I0717 20:17:54.407341 1107949 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem
	I0717 20:17:54.407385 1107949 main.go:141] libmachine: Decoding PEM data...
	I0717 20:17:54.407403 1107949 main.go:141] libmachine: Parsing certificate...
	I0717 20:17:54.407479 1107949 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem
	I0717 20:17:54.407498 1107949 main.go:141] libmachine: Decoding PEM data...
	I0717 20:17:54.407518 1107949 main.go:141] libmachine: Parsing certificate...
	I0717 20:17:54.407539 1107949 main.go:141] libmachine: Running pre-create checks...
	I0717 20:17:54.407552 1107949 main.go:141] libmachine: (auto-395471) Calling .PreCreateCheck
	I0717 20:17:54.407950 1107949 main.go:141] libmachine: (auto-395471) Calling .GetConfigRaw
	I0717 20:17:54.408465 1107949 main.go:141] libmachine: Creating machine...
	I0717 20:17:54.408480 1107949 main.go:141] libmachine: (auto-395471) Calling .Create
	I0717 20:17:54.408647 1107949 main.go:141] libmachine: (auto-395471) Creating KVM machine...
	I0717 20:17:54.410449 1107949 main.go:141] libmachine: (auto-395471) DBG | found existing default KVM network
	I0717 20:17:54.412098 1107949 main.go:141] libmachine: (auto-395471) DBG | I0717 20:17:54.411886 1107971 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:fc:80:4b} reservation:<nil>}
	I0717 20:17:54.413533 1107949 main.go:141] libmachine: (auto-395471) DBG | I0717 20:17:54.413425 1107971 network.go:209] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a2890}
	I0717 20:17:54.420614 1107949 main.go:141] libmachine: (auto-395471) DBG | trying to create private KVM network mk-auto-395471 192.168.50.0/24...
	I0717 20:17:54.509830 1107949 main.go:141] libmachine: (auto-395471) DBG | private KVM network mk-auto-395471 192.168.50.0/24 created
	I0717 20:17:54.509867 1107949 main.go:141] libmachine: (auto-395471) DBG | I0717 20:17:54.509791 1107971 common.go:116] Making disk image using store path: /home/jenkins/minikube-integration/16890-1061725/.minikube
	I0717 20:17:54.509882 1107949 main.go:141] libmachine: (auto-395471) Setting up store path in /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/auto-395471 ...
	I0717 20:17:54.509899 1107949 main.go:141] libmachine: (auto-395471) Building disk image from file:///home/jenkins/minikube-integration/16890-1061725/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso
	I0717 20:17:54.510031 1107949 main.go:141] libmachine: (auto-395471) Downloading /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/16890-1061725/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso...
	I0717 20:17:54.772767 1107949 main.go:141] libmachine: (auto-395471) DBG | I0717 20:17:54.772559 1107971 common.go:123] Creating ssh key: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/auto-395471/id_rsa...
	I0717 20:17:54.857222 1107949 main.go:141] libmachine: (auto-395471) DBG | I0717 20:17:54.857090 1107971 common.go:129] Creating raw disk image: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/auto-395471/auto-395471.rawdisk...
	I0717 20:17:54.857291 1107949 main.go:141] libmachine: (auto-395471) DBG | Writing magic tar header
	I0717 20:17:54.857315 1107949 main.go:141] libmachine: (auto-395471) DBG | Writing SSH key tar header
	I0717 20:17:54.857339 1107949 main.go:141] libmachine: (auto-395471) DBG | I0717 20:17:54.857277 1107971 common.go:143] Fixing permissions on /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/auto-395471 ...
	I0717 20:17:54.857509 1107949 main.go:141] libmachine: (auto-395471) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/auto-395471
	I0717 20:17:54.857579 1107949 main.go:141] libmachine: (auto-395471) Setting executable bit set on /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/auto-395471 (perms=drwx------)
	I0717 20:17:54.857597 1107949 main.go:141] libmachine: (auto-395471) Setting executable bit set on /home/jenkins/minikube-integration/16890-1061725/.minikube/machines (perms=drwxr-xr-x)
	I0717 20:17:54.857614 1107949 main.go:141] libmachine: (auto-395471) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines
	I0717 20:17:54.857633 1107949 main.go:141] libmachine: (auto-395471) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16890-1061725/.minikube
	I0717 20:17:54.857648 1107949 main.go:141] libmachine: (auto-395471) Setting executable bit set on /home/jenkins/minikube-integration/16890-1061725/.minikube (perms=drwxr-xr-x)
	I0717 20:17:54.857669 1107949 main.go:141] libmachine: (auto-395471) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16890-1061725
	I0717 20:17:54.857682 1107949 main.go:141] libmachine: (auto-395471) Setting executable bit set on /home/jenkins/minikube-integration/16890-1061725 (perms=drwxrwxr-x)
	I0717 20:17:54.857700 1107949 main.go:141] libmachine: (auto-395471) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 20:17:54.857715 1107949 main.go:141] libmachine: (auto-395471) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 20:17:54.857732 1107949 main.go:141] libmachine: (auto-395471) Creating domain...
	I0717 20:17:54.857754 1107949 main.go:141] libmachine: (auto-395471) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 20:17:54.857784 1107949 main.go:141] libmachine: (auto-395471) DBG | Checking permissions on dir: /home/jenkins
	I0717 20:17:54.857804 1107949 main.go:141] libmachine: (auto-395471) DBG | Checking permissions on dir: /home
	I0717 20:17:54.857822 1107949 main.go:141] libmachine: (auto-395471) DBG | Skipping /home - not owner
	I0717 20:17:54.859009 1107949 main.go:141] libmachine: (auto-395471) define libvirt domain using xml: 
	I0717 20:17:54.859038 1107949 main.go:141] libmachine: (auto-395471) <domain type='kvm'>
	I0717 20:17:54.859087 1107949 main.go:141] libmachine: (auto-395471)   <name>auto-395471</name>
	I0717 20:17:54.859113 1107949 main.go:141] libmachine: (auto-395471)   <memory unit='MiB'>3072</memory>
	I0717 20:17:54.859128 1107949 main.go:141] libmachine: (auto-395471)   <vcpu>2</vcpu>
	I0717 20:17:54.859139 1107949 main.go:141] libmachine: (auto-395471)   <features>
	I0717 20:17:54.859160 1107949 main.go:141] libmachine: (auto-395471)     <acpi/>
	I0717 20:17:54.859176 1107949 main.go:141] libmachine: (auto-395471)     <apic/>
	I0717 20:17:54.859186 1107949 main.go:141] libmachine: (auto-395471)     <pae/>
	I0717 20:17:54.859197 1107949 main.go:141] libmachine: (auto-395471)     
	I0717 20:17:54.859207 1107949 main.go:141] libmachine: (auto-395471)   </features>
	I0717 20:17:54.859224 1107949 main.go:141] libmachine: (auto-395471)   <cpu mode='host-passthrough'>
	I0717 20:17:54.859235 1107949 main.go:141] libmachine: (auto-395471)   
	I0717 20:17:54.859242 1107949 main.go:141] libmachine: (auto-395471)   </cpu>
	I0717 20:17:54.859298 1107949 main.go:141] libmachine: (auto-395471)   <os>
	I0717 20:17:54.859327 1107949 main.go:141] libmachine: (auto-395471)     <type>hvm</type>
	I0717 20:17:54.859341 1107949 main.go:141] libmachine: (auto-395471)     <boot dev='cdrom'/>
	I0717 20:17:54.859351 1107949 main.go:141] libmachine: (auto-395471)     <boot dev='hd'/>
	I0717 20:17:54.859374 1107949 main.go:141] libmachine: (auto-395471)     <bootmenu enable='no'/>
	I0717 20:17:54.859388 1107949 main.go:141] libmachine: (auto-395471)   </os>
	I0717 20:17:54.859399 1107949 main.go:141] libmachine: (auto-395471)   <devices>
	I0717 20:17:54.859409 1107949 main.go:141] libmachine: (auto-395471)     <disk type='file' device='cdrom'>
	I0717 20:17:54.859424 1107949 main.go:141] libmachine: (auto-395471)       <source file='/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/auto-395471/boot2docker.iso'/>
	I0717 20:17:54.859441 1107949 main.go:141] libmachine: (auto-395471)       <target dev='hdc' bus='scsi'/>
	I0717 20:17:54.859451 1107949 main.go:141] libmachine: (auto-395471)       <readonly/>
	I0717 20:17:54.859459 1107949 main.go:141] libmachine: (auto-395471)     </disk>
	I0717 20:17:54.859470 1107949 main.go:141] libmachine: (auto-395471)     <disk type='file' device='disk'>
	I0717 20:17:54.859481 1107949 main.go:141] libmachine: (auto-395471)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 20:17:54.859519 1107949 main.go:141] libmachine: (auto-395471)       <source file='/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/auto-395471/auto-395471.rawdisk'/>
	I0717 20:17:54.859536 1107949 main.go:141] libmachine: (auto-395471)       <target dev='hda' bus='virtio'/>
	I0717 20:17:54.859547 1107949 main.go:141] libmachine: (auto-395471)     </disk>
	I0717 20:17:54.859557 1107949 main.go:141] libmachine: (auto-395471)     <interface type='network'>
	I0717 20:17:54.859567 1107949 main.go:141] libmachine: (auto-395471)       <source network='mk-auto-395471'/>
	I0717 20:17:54.859577 1107949 main.go:141] libmachine: (auto-395471)       <model type='virtio'/>
	I0717 20:17:54.859586 1107949 main.go:141] libmachine: (auto-395471)     </interface>
	I0717 20:17:54.859596 1107949 main.go:141] libmachine: (auto-395471)     <interface type='network'>
	I0717 20:17:54.859614 1107949 main.go:141] libmachine: (auto-395471)       <source network='default'/>
	I0717 20:17:54.859633 1107949 main.go:141] libmachine: (auto-395471)       <model type='virtio'/>
	I0717 20:17:54.859646 1107949 main.go:141] libmachine: (auto-395471)     </interface>
	I0717 20:17:54.859654 1107949 main.go:141] libmachine: (auto-395471)     <serial type='pty'>
	I0717 20:17:54.859667 1107949 main.go:141] libmachine: (auto-395471)       <target port='0'/>
	I0717 20:17:54.859679 1107949 main.go:141] libmachine: (auto-395471)     </serial>
	I0717 20:17:54.859694 1107949 main.go:141] libmachine: (auto-395471)     <console type='pty'>
	I0717 20:17:54.859706 1107949 main.go:141] libmachine: (auto-395471)       <target type='serial' port='0'/>
	I0717 20:17:54.859716 1107949 main.go:141] libmachine: (auto-395471)     </console>
	I0717 20:17:54.859727 1107949 main.go:141] libmachine: (auto-395471)     <rng model='virtio'>
	I0717 20:17:54.859752 1107949 main.go:141] libmachine: (auto-395471)       <backend model='random'>/dev/random</backend>
	I0717 20:17:54.859778 1107949 main.go:141] libmachine: (auto-395471)     </rng>
	I0717 20:17:54.859788 1107949 main.go:141] libmachine: (auto-395471)     
	I0717 20:17:54.859802 1107949 main.go:141] libmachine: (auto-395471)     
	I0717 20:17:54.859815 1107949 main.go:141] libmachine: (auto-395471)   </devices>
	I0717 20:17:54.859826 1107949 main.go:141] libmachine: (auto-395471) </domain>
	I0717 20:17:54.859837 1107949 main.go:141] libmachine: (auto-395471) 
	I0717 20:17:54.864676 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined MAC address 52:54:00:9a:1f:0c in network default
	I0717 20:17:54.865411 1107949 main.go:141] libmachine: (auto-395471) Ensuring networks are active...
	I0717 20:17:54.865441 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:17:54.866392 1107949 main.go:141] libmachine: (auto-395471) Ensuring network default is active
	I0717 20:17:54.866680 1107949 main.go:141] libmachine: (auto-395471) Ensuring network mk-auto-395471 is active
	I0717 20:17:54.867275 1107949 main.go:141] libmachine: (auto-395471) Getting domain xml...
	I0717 20:17:54.867992 1107949 main.go:141] libmachine: (auto-395471) Creating domain...
	I0717 20:17:56.212011 1107949 main.go:141] libmachine: (auto-395471) Waiting to get IP...
	I0717 20:17:56.213041 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:17:56.213589 1107949 main.go:141] libmachine: (auto-395471) DBG | unable to find current IP address of domain auto-395471 in network mk-auto-395471
	I0717 20:17:56.213650 1107949 main.go:141] libmachine: (auto-395471) DBG | I0717 20:17:56.213550 1107971 retry.go:31] will retry after 259.850403ms: waiting for machine to come up
	I0717 20:17:56.475396 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:17:56.476105 1107949 main.go:141] libmachine: (auto-395471) DBG | unable to find current IP address of domain auto-395471 in network mk-auto-395471
	I0717 20:17:56.476130 1107949 main.go:141] libmachine: (auto-395471) DBG | I0717 20:17:56.476043 1107971 retry.go:31] will retry after 345.73204ms: waiting for machine to come up
	I0717 20:17:56.823835 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:17:56.824453 1107949 main.go:141] libmachine: (auto-395471) DBG | unable to find current IP address of domain auto-395471 in network mk-auto-395471
	I0717 20:17:56.824483 1107949 main.go:141] libmachine: (auto-395471) DBG | I0717 20:17:56.824407 1107971 retry.go:31] will retry after 347.172469ms: waiting for machine to come up
	I0717 20:17:57.173078 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:17:57.173665 1107949 main.go:141] libmachine: (auto-395471) DBG | unable to find current IP address of domain auto-395471 in network mk-auto-395471
	I0717 20:17:57.173698 1107949 main.go:141] libmachine: (auto-395471) DBG | I0717 20:17:57.173586 1107971 retry.go:31] will retry after 576.578426ms: waiting for machine to come up
	I0717 20:17:57.751620 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:17:57.752092 1107949 main.go:141] libmachine: (auto-395471) DBG | unable to find current IP address of domain auto-395471 in network mk-auto-395471
	I0717 20:17:57.752122 1107949 main.go:141] libmachine: (auto-395471) DBG | I0717 20:17:57.752048 1107971 retry.go:31] will retry after 587.836147ms: waiting for machine to come up
	I0717 20:17:58.341833 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:17:58.342399 1107949 main.go:141] libmachine: (auto-395471) DBG | unable to find current IP address of domain auto-395471 in network mk-auto-395471
	I0717 20:17:58.342437 1107949 main.go:141] libmachine: (auto-395471) DBG | I0717 20:17:58.342352 1107971 retry.go:31] will retry after 635.461535ms: waiting for machine to come up
	I0717 20:17:58.981366 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:17:58.981984 1107949 main.go:141] libmachine: (auto-395471) DBG | unable to find current IP address of domain auto-395471 in network mk-auto-395471
	I0717 20:17:58.982017 1107949 main.go:141] libmachine: (auto-395471) DBG | I0717 20:17:58.981935 1107971 retry.go:31] will retry after 1.190299432s: waiting for machine to come up
	I0717 20:18:00.174236 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:00.174835 1107949 main.go:141] libmachine: (auto-395471) DBG | unable to find current IP address of domain auto-395471 in network mk-auto-395471
	I0717 20:18:00.174869 1107949 main.go:141] libmachine: (auto-395471) DBG | I0717 20:18:00.174754 1107971 retry.go:31] will retry after 1.04186379s: waiting for machine to come up
	I0717 20:18:01.218909 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:01.219456 1107949 main.go:141] libmachine: (auto-395471) DBG | unable to find current IP address of domain auto-395471 in network mk-auto-395471
	I0717 20:18:01.219485 1107949 main.go:141] libmachine: (auto-395471) DBG | I0717 20:18:01.219428 1107971 retry.go:31] will retry after 1.579512917s: waiting for machine to come up
	I0717 20:18:02.800808 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:02.801320 1107949 main.go:141] libmachine: (auto-395471) DBG | unable to find current IP address of domain auto-395471 in network mk-auto-395471
	I0717 20:18:02.801377 1107949 main.go:141] libmachine: (auto-395471) DBG | I0717 20:18:02.801263 1107971 retry.go:31] will retry after 1.49827059s: waiting for machine to come up
	I0717 20:18:04.301807 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:04.302475 1107949 main.go:141] libmachine: (auto-395471) DBG | unable to find current IP address of domain auto-395471 in network mk-auto-395471
	I0717 20:18:04.302512 1107949 main.go:141] libmachine: (auto-395471) DBG | I0717 20:18:04.302379 1107971 retry.go:31] will retry after 1.906360897s: waiting for machine to come up
	I0717 20:18:06.211070 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:06.211592 1107949 main.go:141] libmachine: (auto-395471) DBG | unable to find current IP address of domain auto-395471 in network mk-auto-395471
	I0717 20:18:06.211619 1107949 main.go:141] libmachine: (auto-395471) DBG | I0717 20:18:06.211543 1107971 retry.go:31] will retry after 2.42409236s: waiting for machine to come up
	I0717 20:18:08.639074 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:08.639581 1107949 main.go:141] libmachine: (auto-395471) DBG | unable to find current IP address of domain auto-395471 in network mk-auto-395471
	I0717 20:18:08.639614 1107949 main.go:141] libmachine: (auto-395471) DBG | I0717 20:18:08.639534 1107971 retry.go:31] will retry after 3.941080662s: waiting for machine to come up
	I0717 20:18:12.582830 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:12.583454 1107949 main.go:141] libmachine: (auto-395471) DBG | unable to find current IP address of domain auto-395471 in network mk-auto-395471
	I0717 20:18:12.583485 1107949 main.go:141] libmachine: (auto-395471) DBG | I0717 20:18:12.583413 1107971 retry.go:31] will retry after 3.585914963s: waiting for machine to come up
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-07-17 19:58:06 UTC, ends at Mon 2023-07-17 20:18:16 UTC. --
	Jul 17 20:18:16 no-preload-408472 crio[723]: time="2023-07-17 20:18:16.086486873Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=96e6d47d-ed1e-4c65-b61c-ee5acc445130 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:18:16 no-preload-408472 crio[723]: time="2023-07-17 20:18:16.086562095Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=96e6d47d-ed1e-4c65-b61c-ee5acc445130 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:18:16 no-preload-408472 crio[723]: time="2023-07-17 20:18:16.086790339Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447,PodSandboxId:ff5ad5d7dd32f07cfc1c1208551ac8f3f4b8a46bfe51e0c02b2a396b98e4739e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1689623971491805613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1aefd8ef-dec9-4e37-8648-8e5a62622cd3,},Annotations:map[string]string{io.kubernetes.container.hash: 81256f0b,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f1dd1f8bfc6ea5fc2d525f06a7f97e022380348ee01a95515fe2f2ce720db01,PodSandboxId:f3cb758549447a012fdd15abdb36701951fc4124b28d80f35ab3b605e33c55b7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689623946896235359,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5c8e4faa-fb22-4e2f-a383-de7b5122346b,},Annotations:map[string]string{io.kubernetes.container.hash: a627707f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce,PodSandboxId:a33bbe47c7157f306c2cccc2a008e4a8da0f139d93938630c79f53f73f354f49,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1689623945137959545,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-9mxdj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbff09fd-436d-4208-9187-b6312aa1c223,},Annotations:map[string]string{io.kubernetes.container.hash: dd58a968,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a,PodSandboxId:47139040a14d65679324a9cf0e054dd8bc5674f553198db965b9d67d3a8b2a93,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5c781e52b7df26c513fd7a17cb064f14d0e54ec60c78bb251d66373acabee06f,State:CONTAINER_RUNNING,CreatedAt:1689623940422461512,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cntdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8653567b-a
bf9-468c-a030-45fc53fa0cc2,},Annotations:map[string]string{io.kubernetes.container.hash: 8ee9a8a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379,PodSandboxId:ff5ad5d7dd32f07cfc1c1208551ac8f3f4b8a46bfe51e0c02b2a396b98e4739e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1689623940308828471,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1aefd8ef-dec
9-4e37-8648-8e5a62622cd3,},Annotations:map[string]string{io.kubernetes.container.hash: 81256f0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc,PodSandboxId:5b4b09ff53722684ddeca8822d7d45236df17428043c8548bbd076e384d8527f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:debe9299a031af8da434bff972ab85d954976787aea87f06307f9212e6c94efb,State:CONTAINER_RUNNING,CreatedAt:1689623932754672917,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-408472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7764e32ea62c4e843571c1c8b26e43,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 871ad5da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9,PodSandboxId:57e1bf1c090b7c994b8721816c39394f5f4ebe3efa6b4ee27f8e616ceb0c2504,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5e1d3a533b714d2b891c6bdbdcbe86051e96eb7a8b76c751f540f7d4de3fd133,State:CONTAINER_RUNNING,CreatedAt:1689623932584128267,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-408472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e80a485a65dc98d5f01a92b53c5fa5,}
,Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5,PodSandboxId:4be199ed8e485981d4a0f5659c2dfd25f5d143ddab31ff6dd001a4d53ec1313c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:1312d079d0fd189075a309c99b061074e04ca1796d87edf7fd5f7aa7ea021439,State:CONTAINER_RUNNING,CreatedAt:1689623932544937261,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-408472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 576be588fe38e2234a5c6f2fb28de233,},Annotation
s:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3,PodSandboxId:37d2b34c5db08f953540e06b283123cadf97d1120d2186b7b73e68f0cd97da2f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:d1c84bdb695b9d4c6d5a5ffc1370dd8e39cc36a06772e31f78deed3e29ac2cef,State:CONTAINER_RUNNING,CreatedAt:1689623932255864684,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-408472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c8bc648017d2e10f87a375e6d180ad7,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 6efb3e9a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=96e6d47d-ed1e-4c65-b61c-ee5acc445130 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:18:16 no-preload-408472 crio[723]: time="2023-07-17 20:18:16.127559610Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4ea75fe8-c671-4110-959e-a92cbd248f2d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:18:16 no-preload-408472 crio[723]: time="2023-07-17 20:18:16.127671243Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4ea75fe8-c671-4110-959e-a92cbd248f2d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:18:16 no-preload-408472 crio[723]: time="2023-07-17 20:18:16.131364938Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447,PodSandboxId:ff5ad5d7dd32f07cfc1c1208551ac8f3f4b8a46bfe51e0c02b2a396b98e4739e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1689623971491805613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1aefd8ef-dec9-4e37-8648-8e5a62622cd3,},Annotations:map[string]string{io.kubernetes.container.hash: 81256f0b,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f1dd1f8bfc6ea5fc2d525f06a7f97e022380348ee01a95515fe2f2ce720db01,PodSandboxId:f3cb758549447a012fdd15abdb36701951fc4124b28d80f35ab3b605e33c55b7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689623946896235359,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5c8e4faa-fb22-4e2f-a383-de7b5122346b,},Annotations:map[string]string{io.kubernetes.container.hash: a627707f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce,PodSandboxId:a33bbe47c7157f306c2cccc2a008e4a8da0f139d93938630c79f53f73f354f49,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1689623945137959545,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-9mxdj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbff09fd-436d-4208-9187-b6312aa1c223,},Annotations:map[string]string{io.kubernetes.container.hash: dd58a968,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a,PodSandboxId:47139040a14d65679324a9cf0e054dd8bc5674f553198db965b9d67d3a8b2a93,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5c781e52b7df26c513fd7a17cb064f14d0e54ec60c78bb251d66373acabee06f,State:CONTAINER_RUNNING,CreatedAt:1689623940422461512,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cntdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8653567b-a
bf9-468c-a030-45fc53fa0cc2,},Annotations:map[string]string{io.kubernetes.container.hash: 8ee9a8a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379,PodSandboxId:ff5ad5d7dd32f07cfc1c1208551ac8f3f4b8a46bfe51e0c02b2a396b98e4739e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1689623940308828471,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1aefd8ef-dec
9-4e37-8648-8e5a62622cd3,},Annotations:map[string]string{io.kubernetes.container.hash: 81256f0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc,PodSandboxId:5b4b09ff53722684ddeca8822d7d45236df17428043c8548bbd076e384d8527f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:debe9299a031af8da434bff972ab85d954976787aea87f06307f9212e6c94efb,State:CONTAINER_RUNNING,CreatedAt:1689623932754672917,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-408472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7764e32ea62c4e843571c1c8b26e43,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 871ad5da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9,PodSandboxId:57e1bf1c090b7c994b8721816c39394f5f4ebe3efa6b4ee27f8e616ceb0c2504,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5e1d3a533b714d2b891c6bdbdcbe86051e96eb7a8b76c751f540f7d4de3fd133,State:CONTAINER_RUNNING,CreatedAt:1689623932584128267,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-408472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e80a485a65dc98d5f01a92b53c5fa5,}
,Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5,PodSandboxId:4be199ed8e485981d4a0f5659c2dfd25f5d143ddab31ff6dd001a4d53ec1313c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:1312d079d0fd189075a309c99b061074e04ca1796d87edf7fd5f7aa7ea021439,State:CONTAINER_RUNNING,CreatedAt:1689623932544937261,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-408472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 576be588fe38e2234a5c6f2fb28de233,},Annotation
s:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3,PodSandboxId:37d2b34c5db08f953540e06b283123cadf97d1120d2186b7b73e68f0cd97da2f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:d1c84bdb695b9d4c6d5a5ffc1370dd8e39cc36a06772e31f78deed3e29ac2cef,State:CONTAINER_RUNNING,CreatedAt:1689623932255864684,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-408472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c8bc648017d2e10f87a375e6d180ad7,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 6efb3e9a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4ea75fe8-c671-4110-959e-a92cbd248f2d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:18:16 no-preload-408472 crio[723]: time="2023-07-17 20:18:16.177728168Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1ff4098e-9f96-47df-a6e1-6cdda79bbf05 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:18:16 no-preload-408472 crio[723]: time="2023-07-17 20:18:16.177800228Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1ff4098e-9f96-47df-a6e1-6cdda79bbf05 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:18:16 no-preload-408472 crio[723]: time="2023-07-17 20:18:16.178005873Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447,PodSandboxId:ff5ad5d7dd32f07cfc1c1208551ac8f3f4b8a46bfe51e0c02b2a396b98e4739e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1689623971491805613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1aefd8ef-dec9-4e37-8648-8e5a62622cd3,},Annotations:map[string]string{io.kubernetes.container.hash: 81256f0b,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f1dd1f8bfc6ea5fc2d525f06a7f97e022380348ee01a95515fe2f2ce720db01,PodSandboxId:f3cb758549447a012fdd15abdb36701951fc4124b28d80f35ab3b605e33c55b7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689623946896235359,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5c8e4faa-fb22-4e2f-a383-de7b5122346b,},Annotations:map[string]string{io.kubernetes.container.hash: a627707f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce,PodSandboxId:a33bbe47c7157f306c2cccc2a008e4a8da0f139d93938630c79f53f73f354f49,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1689623945137959545,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-9mxdj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbff09fd-436d-4208-9187-b6312aa1c223,},Annotations:map[string]string{io.kubernetes.container.hash: dd58a968,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a,PodSandboxId:47139040a14d65679324a9cf0e054dd8bc5674f553198db965b9d67d3a8b2a93,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5c781e52b7df26c513fd7a17cb064f14d0e54ec60c78bb251d66373acabee06f,State:CONTAINER_RUNNING,CreatedAt:1689623940422461512,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cntdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8653567b-a
bf9-468c-a030-45fc53fa0cc2,},Annotations:map[string]string{io.kubernetes.container.hash: 8ee9a8a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379,PodSandboxId:ff5ad5d7dd32f07cfc1c1208551ac8f3f4b8a46bfe51e0c02b2a396b98e4739e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1689623940308828471,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1aefd8ef-dec
9-4e37-8648-8e5a62622cd3,},Annotations:map[string]string{io.kubernetes.container.hash: 81256f0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc,PodSandboxId:5b4b09ff53722684ddeca8822d7d45236df17428043c8548bbd076e384d8527f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:debe9299a031af8da434bff972ab85d954976787aea87f06307f9212e6c94efb,State:CONTAINER_RUNNING,CreatedAt:1689623932754672917,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-408472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7764e32ea62c4e843571c1c8b26e43,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 871ad5da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9,PodSandboxId:57e1bf1c090b7c994b8721816c39394f5f4ebe3efa6b4ee27f8e616ceb0c2504,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5e1d3a533b714d2b891c6bdbdcbe86051e96eb7a8b76c751f540f7d4de3fd133,State:CONTAINER_RUNNING,CreatedAt:1689623932584128267,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-408472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e80a485a65dc98d5f01a92b53c5fa5,}
,Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5,PodSandboxId:4be199ed8e485981d4a0f5659c2dfd25f5d143ddab31ff6dd001a4d53ec1313c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:1312d079d0fd189075a309c99b061074e04ca1796d87edf7fd5f7aa7ea021439,State:CONTAINER_RUNNING,CreatedAt:1689623932544937261,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-408472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 576be588fe38e2234a5c6f2fb28de233,},Annotation
s:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3,PodSandboxId:37d2b34c5db08f953540e06b283123cadf97d1120d2186b7b73e68f0cd97da2f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:d1c84bdb695b9d4c6d5a5ffc1370dd8e39cc36a06772e31f78deed3e29ac2cef,State:CONTAINER_RUNNING,CreatedAt:1689623932255864684,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-408472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c8bc648017d2e10f87a375e6d180ad7,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 6efb3e9a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1ff4098e-9f96-47df-a6e1-6cdda79bbf05 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:18:16 no-preload-408472 crio[723]: time="2023-07-17 20:18:16.180238859Z" level=debug msg="Request: &ImageStatusRequest{Image:&ImageSpec{Image:fake.domain/registry.k8s.io/echoserver:1.4,Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T19:58:59.076575679Z,kubernetes.io/config.source: api,},},Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=8e581774-7966-4b1d-b787-3cea717550e2 name=/runtime.v1.ImageService/ImageStatus
	Jul 17 20:18:16 no-preload-408472 crio[723]: time="2023-07-17 20:18:16.180296227Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" file="server/image_status.go:30" id=8e581774-7966-4b1d-b787-3cea717550e2 name=/runtime.v1.ImageService/ImageStatus
	Jul 17 20:18:16 no-preload-408472 crio[723]: time="2023-07-17 20:18:16.180517312Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]fake.domain/registry.k8s.io/echoserver:1.4\"" file="storage/storage_transport.go:185"
	Jul 17 20:18:16 no-preload-408472 crio[723]: time="2023-07-17 20:18:16.180622968Z" level=debug msg="reference \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]fake.domain/registry.k8s.io/echoserver:1.4\" does not resolve to an image ID" file="storage/storage_reference.go:147"
	Jul 17 20:18:16 no-preload-408472 crio[723]: time="2023-07-17 20:18:16.180660739Z" level=debug msg="Can't find fake.domain/registry.k8s.io/echoserver:1.4" file="server/image_status.go:47" id=8e581774-7966-4b1d-b787-3cea717550e2 name=/runtime.v1.ImageService/ImageStatus
	Jul 17 20:18:16 no-preload-408472 crio[723]: time="2023-07-17 20:18:16.180682139Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" file="server/image_status.go:90" id=8e581774-7966-4b1d-b787-3cea717550e2 name=/runtime.v1.ImageService/ImageStatus
	Jul 17 20:18:16 no-preload-408472 crio[723]: time="2023-07-17 20:18:16.180705288Z" level=debug msg="Response: &ImageStatusResponse{Image:nil,Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=8e581774-7966-4b1d-b787-3cea717550e2 name=/runtime.v1.ImageService/ImageStatus
	Jul 17 20:18:16 no-preload-408472 crio[723]: time="2023-07-17 20:18:16.218270415Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3ca3b171-3180-4ba4-91b4-4689c543f6cd name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:18:16 no-preload-408472 crio[723]: time="2023-07-17 20:18:16.218340017Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3ca3b171-3180-4ba4-91b4-4689c543f6cd name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:18:16 no-preload-408472 crio[723]: time="2023-07-17 20:18:16.218664306Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447,PodSandboxId:ff5ad5d7dd32f07cfc1c1208551ac8f3f4b8a46bfe51e0c02b2a396b98e4739e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1689623971491805613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1aefd8ef-dec9-4e37-8648-8e5a62622cd3,},Annotations:map[string]string{io.kubernetes.container.hash: 81256f0b,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f1dd1f8bfc6ea5fc2d525f06a7f97e022380348ee01a95515fe2f2ce720db01,PodSandboxId:f3cb758549447a012fdd15abdb36701951fc4124b28d80f35ab3b605e33c55b7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689623946896235359,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5c8e4faa-fb22-4e2f-a383-de7b5122346b,},Annotations:map[string]string{io.kubernetes.container.hash: a627707f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce,PodSandboxId:a33bbe47c7157f306c2cccc2a008e4a8da0f139d93938630c79f53f73f354f49,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1689623945137959545,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-9mxdj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbff09fd-436d-4208-9187-b6312aa1c223,},Annotations:map[string]string{io.kubernetes.container.hash: dd58a968,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a,PodSandboxId:47139040a14d65679324a9cf0e054dd8bc5674f553198db965b9d67d3a8b2a93,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5c781e52b7df26c513fd7a17cb064f14d0e54ec60c78bb251d66373acabee06f,State:CONTAINER_RUNNING,CreatedAt:1689623940422461512,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cntdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8653567b-a
bf9-468c-a030-45fc53fa0cc2,},Annotations:map[string]string{io.kubernetes.container.hash: 8ee9a8a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379,PodSandboxId:ff5ad5d7dd32f07cfc1c1208551ac8f3f4b8a46bfe51e0c02b2a396b98e4739e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1689623940308828471,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1aefd8ef-dec
9-4e37-8648-8e5a62622cd3,},Annotations:map[string]string{io.kubernetes.container.hash: 81256f0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc,PodSandboxId:5b4b09ff53722684ddeca8822d7d45236df17428043c8548bbd076e384d8527f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:debe9299a031af8da434bff972ab85d954976787aea87f06307f9212e6c94efb,State:CONTAINER_RUNNING,CreatedAt:1689623932754672917,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-408472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7764e32ea62c4e843571c1c8b26e43,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 871ad5da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9,PodSandboxId:57e1bf1c090b7c994b8721816c39394f5f4ebe3efa6b4ee27f8e616ceb0c2504,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5e1d3a533b714d2b891c6bdbdcbe86051e96eb7a8b76c751f540f7d4de3fd133,State:CONTAINER_RUNNING,CreatedAt:1689623932584128267,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-408472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e80a485a65dc98d5f01a92b53c5fa5,}
,Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5,PodSandboxId:4be199ed8e485981d4a0f5659c2dfd25f5d143ddab31ff6dd001a4d53ec1313c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:1312d079d0fd189075a309c99b061074e04ca1796d87edf7fd5f7aa7ea021439,State:CONTAINER_RUNNING,CreatedAt:1689623932544937261,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-408472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 576be588fe38e2234a5c6f2fb28de233,},Annotation
s:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3,PodSandboxId:37d2b34c5db08f953540e06b283123cadf97d1120d2186b7b73e68f0cd97da2f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:d1c84bdb695b9d4c6d5a5ffc1370dd8e39cc36a06772e31f78deed3e29ac2cef,State:CONTAINER_RUNNING,CreatedAt:1689623932255864684,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-408472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c8bc648017d2e10f87a375e6d180ad7,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 6efb3e9a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3ca3b171-3180-4ba4-91b4-4689c543f6cd name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:18:16 no-preload-408472 crio[723]: time="2023-07-17 20:18:16.260580049Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7fa1e181-2f53-49e2-8eb1-358b46cc5c9d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:18:16 no-preload-408472 crio[723]: time="2023-07-17 20:18:16.260679205Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7fa1e181-2f53-49e2-8eb1-358b46cc5c9d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:18:16 no-preload-408472 crio[723]: time="2023-07-17 20:18:16.260950417Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447,PodSandboxId:ff5ad5d7dd32f07cfc1c1208551ac8f3f4b8a46bfe51e0c02b2a396b98e4739e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1689623971491805613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1aefd8ef-dec9-4e37-8648-8e5a62622cd3,},Annotations:map[string]string{io.kubernetes.container.hash: 81256f0b,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f1dd1f8bfc6ea5fc2d525f06a7f97e022380348ee01a95515fe2f2ce720db01,PodSandboxId:f3cb758549447a012fdd15abdb36701951fc4124b28d80f35ab3b605e33c55b7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689623946896235359,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5c8e4faa-fb22-4e2f-a383-de7b5122346b,},Annotations:map[string]string{io.kubernetes.container.hash: a627707f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce,PodSandboxId:a33bbe47c7157f306c2cccc2a008e4a8da0f139d93938630c79f53f73f354f49,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1689623945137959545,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-9mxdj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbff09fd-436d-4208-9187-b6312aa1c223,},Annotations:map[string]string{io.kubernetes.container.hash: dd58a968,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a,PodSandboxId:47139040a14d65679324a9cf0e054dd8bc5674f553198db965b9d67d3a8b2a93,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5c781e52b7df26c513fd7a17cb064f14d0e54ec60c78bb251d66373acabee06f,State:CONTAINER_RUNNING,CreatedAt:1689623940422461512,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cntdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8653567b-a
bf9-468c-a030-45fc53fa0cc2,},Annotations:map[string]string{io.kubernetes.container.hash: 8ee9a8a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379,PodSandboxId:ff5ad5d7dd32f07cfc1c1208551ac8f3f4b8a46bfe51e0c02b2a396b98e4739e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1689623940308828471,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1aefd8ef-dec
9-4e37-8648-8e5a62622cd3,},Annotations:map[string]string{io.kubernetes.container.hash: 81256f0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc,PodSandboxId:5b4b09ff53722684ddeca8822d7d45236df17428043c8548bbd076e384d8527f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:debe9299a031af8da434bff972ab85d954976787aea87f06307f9212e6c94efb,State:CONTAINER_RUNNING,CreatedAt:1689623932754672917,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-408472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7764e32ea62c4e843571c1c8b26e43,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 871ad5da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9,PodSandboxId:57e1bf1c090b7c994b8721816c39394f5f4ebe3efa6b4ee27f8e616ceb0c2504,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5e1d3a533b714d2b891c6bdbdcbe86051e96eb7a8b76c751f540f7d4de3fd133,State:CONTAINER_RUNNING,CreatedAt:1689623932584128267,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-408472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e80a485a65dc98d5f01a92b53c5fa5,}
,Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5,PodSandboxId:4be199ed8e485981d4a0f5659c2dfd25f5d143ddab31ff6dd001a4d53ec1313c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:1312d079d0fd189075a309c99b061074e04ca1796d87edf7fd5f7aa7ea021439,State:CONTAINER_RUNNING,CreatedAt:1689623932544937261,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-408472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 576be588fe38e2234a5c6f2fb28de233,},Annotation
s:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3,PodSandboxId:37d2b34c5db08f953540e06b283123cadf97d1120d2186b7b73e68f0cd97da2f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:d1c84bdb695b9d4c6d5a5ffc1370dd8e39cc36a06772e31f78deed3e29ac2cef,State:CONTAINER_RUNNING,CreatedAt:1689623932255864684,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-408472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c8bc648017d2e10f87a375e6d180ad7,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 6efb3e9a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7fa1e181-2f53-49e2-8eb1-358b46cc5c9d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:18:16 no-preload-408472 crio[723]: time="2023-07-17 20:18:16.301240906Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3dac2f9d-2e90-4d0c-bdd5-f9feef3964c5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:18:16 no-preload-408472 crio[723]: time="2023-07-17 20:18:16.301311336Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3dac2f9d-2e90-4d0c-bdd5-f9feef3964c5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:18:16 no-preload-408472 crio[723]: time="2023-07-17 20:18:16.301576433Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447,PodSandboxId:ff5ad5d7dd32f07cfc1c1208551ac8f3f4b8a46bfe51e0c02b2a396b98e4739e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1689623971491805613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1aefd8ef-dec9-4e37-8648-8e5a62622cd3,},Annotations:map[string]string{io.kubernetes.container.hash: 81256f0b,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f1dd1f8bfc6ea5fc2d525f06a7f97e022380348ee01a95515fe2f2ce720db01,PodSandboxId:f3cb758549447a012fdd15abdb36701951fc4124b28d80f35ab3b605e33c55b7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689623946896235359,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5c8e4faa-fb22-4e2f-a383-de7b5122346b,},Annotations:map[string]string{io.kubernetes.container.hash: a627707f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce,PodSandboxId:a33bbe47c7157f306c2cccc2a008e4a8da0f139d93938630c79f53f73f354f49,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1689623945137959545,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-9mxdj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbff09fd-436d-4208-9187-b6312aa1c223,},Annotations:map[string]string{io.kubernetes.container.hash: dd58a968,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a,PodSandboxId:47139040a14d65679324a9cf0e054dd8bc5674f553198db965b9d67d3a8b2a93,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5c781e52b7df26c513fd7a17cb064f14d0e54ec60c78bb251d66373acabee06f,State:CONTAINER_RUNNING,CreatedAt:1689623940422461512,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cntdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8653567b-a
bf9-468c-a030-45fc53fa0cc2,},Annotations:map[string]string{io.kubernetes.container.hash: 8ee9a8a6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379,PodSandboxId:ff5ad5d7dd32f07cfc1c1208551ac8f3f4b8a46bfe51e0c02b2a396b98e4739e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1689623940308828471,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1aefd8ef-dec
9-4e37-8648-8e5a62622cd3,},Annotations:map[string]string{io.kubernetes.container.hash: 81256f0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc,PodSandboxId:5b4b09ff53722684ddeca8822d7d45236df17428043c8548bbd076e384d8527f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:debe9299a031af8da434bff972ab85d954976787aea87f06307f9212e6c94efb,State:CONTAINER_RUNNING,CreatedAt:1689623932754672917,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-408472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7764e32ea62c4e843571c1c8b26e43,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 871ad5da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9,PodSandboxId:57e1bf1c090b7c994b8721816c39394f5f4ebe3efa6b4ee27f8e616ceb0c2504,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5e1d3a533b714d2b891c6bdbdcbe86051e96eb7a8b76c751f540f7d4de3fd133,State:CONTAINER_RUNNING,CreatedAt:1689623932584128267,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-408472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2e80a485a65dc98d5f01a92b53c5fa5,}
,Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5,PodSandboxId:4be199ed8e485981d4a0f5659c2dfd25f5d143ddab31ff6dd001a4d53ec1313c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:1312d079d0fd189075a309c99b061074e04ca1796d87edf7fd5f7aa7ea021439,State:CONTAINER_RUNNING,CreatedAt:1689623932544937261,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-408472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 576be588fe38e2234a5c6f2fb28de233,},Annotation
s:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3,PodSandboxId:37d2b34c5db08f953540e06b283123cadf97d1120d2186b7b73e68f0cd97da2f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:d1c84bdb695b9d4c6d5a5ffc1370dd8e39cc36a06772e31f78deed3e29ac2cef,State:CONTAINER_RUNNING,CreatedAt:1689623932255864684,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-408472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c8bc648017d2e10f87a375e6d180ad7,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 6efb3e9a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3dac2f9d-2e90-4d0c-bdd5-f9feef3964c5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	434d3b3c5d986       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Running             storage-provisioner       3                   ff5ad5d7dd32f
	1f1dd1f8bfc6e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   f3cb758549447
	63dc2a3f8ace5       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      19 minutes ago      Running             coredns                   1                   a33bbe47c7157
	c8746d568c4d0       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c                                      19 minutes ago      Running             kube-proxy                1                   47139040a14d6
	cb2ddc8935dcd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       2                   ff5ad5d7dd32f
	4a90287e5fc16       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681                                      19 minutes ago      Running             etcd                      1                   5b4b09ff53722
	2ba1ed857458d       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f                                      19 minutes ago      Running             kube-controller-manager   1                   57e1bf1c090b7
	0db29fec08ce9       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a                                      19 minutes ago      Running             kube-scheduler            1                   4be199ed8e485
	eec27ef53d6bc       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a                                      19 minutes ago      Running             kube-apiserver            1                   37d2b34c5db08
	
	* 
	* ==> coredns [63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:53754 - 13599 "HINFO IN 595208134056070901.5201549753386648626. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.028425985s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-408472
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-408472
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5
	                    minikube.k8s.io/name=no-preload-408472
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T19_49_48_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 19:49:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-408472
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 20:18:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 20:14:46 +0000   Mon, 17 Jul 2023 19:49:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 20:14:46 +0000   Mon, 17 Jul 2023 19:49:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 20:14:46 +0000   Mon, 17 Jul 2023 19:49:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 20:14:46 +0000   Mon, 17 Jul 2023 19:59:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.65
	  Hostname:    no-preload-408472
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 4af696079b3a42e08bf5e45b6c9af525
	  System UUID:                4af69607-9b3a-42e0-8bf5-e45b6c9af525
	  Boot ID:                    ad4ad896-f9d0-475d-9d7f-ee3c3d9b501b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 coredns-5d78c9869d-9mxdj                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-408472                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-408472             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-no-preload-408472    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-cntdn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-408472             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-74d5c6b9c-hnngh               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node no-preload-408472 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node no-preload-408472 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node no-preload-408472 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                28m                kubelet          Node no-preload-408472 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-408472 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-408472 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-408472 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-408472 event: Registered Node no-preload-408472 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node no-preload-408472 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node no-preload-408472 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node no-preload-408472 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node no-preload-408472 event: Registered Node no-preload-408472 in Controller
	
	* 
	* ==> dmesg <==
	* [Jul17 19:57] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.073696] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Jul17 19:58] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.604245] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.146173] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.515395] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.291847] systemd-fstab-generator[648]: Ignoring "noauto" for root device
	[  +0.118209] systemd-fstab-generator[659]: Ignoring "noauto" for root device
	[  +0.165153] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.111246] systemd-fstab-generator[683]: Ignoring "noauto" for root device
	[  +0.250866] systemd-fstab-generator[707]: Ignoring "noauto" for root device
	[ +31.372713] systemd-fstab-generator[1239]: Ignoring "noauto" for root device
	[Jul17 19:59] kauditd_printk_skb: 29 callbacks suppressed
	
	* 
	* ==> etcd [4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc] <==
	* {"level":"warn","ts":"2023-07-17T19:59:11.446Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T19:59:11.141Z","time spent":"305.274495ms","remote":"127.0.0.1:56200","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4118,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:607 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4069 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	{"level":"warn","ts":"2023-07-17T19:59:11.451Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"315.827526ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-no-preload-408472\" ","response":"range_response_count:1 size:6343"}
	{"level":"info","ts":"2023-07-17T19:59:11.451Z","caller":"traceutil/trace.go:171","msg":"trace[842066160] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-no-preload-408472; range_end:; response_count:1; response_revision:612; }","duration":"315.978161ms","start":"2023-07-17T19:59:11.135Z","end":"2023-07-17T19:59:11.451Z","steps":["trace[842066160] 'agreement among raft nodes before linearized reading'  (duration: 315.739367ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T19:59:11.451Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T19:59:11.135Z","time spent":"316.213528ms","remote":"127.0.0.1:56136","response type":"/etcdserverpb.KV/Range","request count":0,"request size":70,"response count":1,"response size":6366,"request content":"key:\"/registry/pods/kube-system/kube-controller-manager-no-preload-408472\" "}
	{"level":"warn","ts":"2023-07-17T19:59:11.451Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"305.005177ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/metrics-server\" ","response":"range_response_count:1 size:5182"}
	{"level":"info","ts":"2023-07-17T19:59:11.451Z","caller":"traceutil/trace.go:171","msg":"trace[1619904632] range","detail":"{range_begin:/registry/deployments/kube-system/metrics-server; range_end:; response_count:1; response_revision:612; }","duration":"305.084461ms","start":"2023-07-17T19:59:11.146Z","end":"2023-07-17T19:59:11.451Z","steps":["trace[1619904632] 'agreement among raft nodes before linearized reading'  (duration: 304.96695ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T19:59:11.452Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T19:59:11.146Z","time spent":"305.155743ms","remote":"127.0.0.1:56200","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":1,"response size":5205,"request content":"key:\"/registry/deployments/kube-system/metrics-server\" "}
	{"level":"warn","ts":"2023-07-17T19:59:11.452Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"305.373749ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4133"}
	{"level":"info","ts":"2023-07-17T19:59:11.452Z","caller":"traceutil/trace.go:171","msg":"trace[400271151] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:612; }","duration":"305.530541ms","start":"2023-07-17T19:59:11.146Z","end":"2023-07-17T19:59:11.452Z","steps":["trace[400271151] 'agreement among raft nodes before linearized reading'  (duration: 305.322037ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T19:59:11.452Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T19:59:11.146Z","time spent":"305.574073ms","remote":"127.0.0.1:56200","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":1,"response size":4156,"request content":"key:\"/registry/deployments/kube-system/coredns\" "}
	{"level":"warn","ts":"2023-07-17T19:59:11.452Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.521193ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-no-preload-408472\" ","response":"range_response_count:1 size:6727"}
	{"level":"info","ts":"2023-07-17T19:59:11.452Z","caller":"traceutil/trace.go:171","msg":"trace[575452763] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-no-preload-408472; range_end:; response_count:1; response_revision:612; }","duration":"149.584784ms","start":"2023-07-17T19:59:11.303Z","end":"2023-07-17T19:59:11.452Z","steps":["trace[575452763] 'agreement among raft nodes before linearized reading'  (duration: 149.48203ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T19:59:11.879Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.160787ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15486904287370853960 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-no-preload-408472\" mod_revision:520 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-no-preload-408472\" value_size:6252 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-no-preload-408472\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-07-17T19:59:11.879Z","caller":"traceutil/trace.go:171","msg":"trace[387970279] linearizableReadLoop","detail":"{readStateIndex:656; appliedIndex:655; }","duration":"396.454267ms","start":"2023-07-17T19:59:11.483Z","end":"2023-07-17T19:59:11.879Z","steps":["trace[387970279] 'read index received'  (duration: 236.277283ms)","trace[387970279] 'applied index is now lower than readState.Index'  (duration: 160.175907ms)"],"step_count":2}
	{"level":"info","ts":"2023-07-17T19:59:11.879Z","caller":"traceutil/trace.go:171","msg":"trace[1839751726] transaction","detail":"{read_only:false; response_revision:613; number_of_response:1; }","duration":"398.595379ms","start":"2023-07-17T19:59:11.481Z","end":"2023-07-17T19:59:11.879Z","steps":["trace[1839751726] 'process raft request'  (duration: 238.042921ms)","trace[1839751726] 'compare'  (duration: 159.987931ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-17T19:59:11.880Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T19:59:11.481Z","time spent":"398.657132ms","remote":"127.0.0.1:56136","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6328,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-no-preload-408472\" mod_revision:520 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-no-preload-408472\" value_size:6252 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-no-preload-408472\" > >"}
	{"level":"warn","ts":"2023-07-17T19:59:11.880Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"396.944796ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-17T19:59:11.880Z","caller":"traceutil/trace.go:171","msg":"trace[2087394543] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:613; }","duration":"396.968671ms","start":"2023-07-17T19:59:11.483Z","end":"2023-07-17T19:59:11.880Z","steps":["trace[2087394543] 'agreement among raft nodes before linearized reading'  (duration: 396.90471ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T19:59:11.880Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T19:59:11.483Z","time spent":"397.002165ms","remote":"127.0.0.1:56098","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2023-07-17T20:08:56.303Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":860}
	{"level":"info","ts":"2023-07-17T20:08:56.306Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":860,"took":"2.515863ms","hash":3076115344}
	{"level":"info","ts":"2023-07-17T20:08:56.306Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3076115344,"revision":860,"compact-revision":-1}
	{"level":"info","ts":"2023-07-17T20:13:56.312Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1102}
	{"level":"info","ts":"2023-07-17T20:13:56.315Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1102,"took":"1.869236ms","hash":2161828363}
	{"level":"info","ts":"2023-07-17T20:13:56.315Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2161828363,"revision":1102,"compact-revision":860}
	
	* 
	* ==> kernel <==
	*  20:18:16 up 20 min,  0 users,  load average: 0.91, 0.30, 0.19
	Linux no-preload-408472 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3] <==
	* E0717 20:13:59.099994       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 20:13:59.101194       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 20:14:57.945890       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.111.153.136:443: connect: connection refused
	I0717 20:14:57.945947       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0717 20:14:59.101034       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 20:14:59.101213       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 20:14:59.101297       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 20:14:59.101492       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 20:14:59.101585       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 20:14:59.103374       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 20:15:57.945800       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.111.153.136:443: connect: connection refused
	I0717 20:15:57.945882       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0717 20:16:57.944737       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.111.153.136:443: connect: connection refused
	I0717 20:16:57.944798       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0717 20:16:59.102216       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 20:16:59.102364       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 20:16:59.102473       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 20:16:59.104645       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 20:16:59.104740       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 20:16:59.104750       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 20:17:57.945758       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.111.153.136:443: connect: connection refused
	I0717 20:17:57.945868       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9] <==
	* W0717 20:12:11.365087       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:12:40.887522       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:12:41.374353       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:13:10.893592       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:13:11.385858       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:13:40.902983       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:13:41.394297       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:14:10.909639       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:14:11.404115       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:14:40.917624       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:14:41.415376       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:15:10.924917       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:15:11.427114       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:15:40.931167       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:15:41.437233       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:16:10.939936       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:16:11.452606       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:16:40.947325       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:16:41.462890       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:17:10.954531       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:17:11.472332       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:17:40.961226       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:17:41.485639       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:18:10.971912       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:18:11.496639       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	
	* 
	* ==> kube-proxy [c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a] <==
	* I0717 19:59:00.609142       1 node.go:141] Successfully retrieved node IP: 192.168.61.65
	I0717 19:59:00.609316       1 server_others.go:110] "Detected node IP" address="192.168.61.65"
	I0717 19:59:00.609349       1 server_others.go:554] "Using iptables proxy"
	I0717 19:59:00.649064       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0717 19:59:00.649143       1 server_others.go:192] "Using iptables Proxier"
	I0717 19:59:00.649186       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 19:59:00.649775       1 server.go:658] "Version info" version="v1.27.3"
	I0717 19:59:00.650023       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 19:59:00.651356       1 config.go:188] "Starting service config controller"
	I0717 19:59:00.651553       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 19:59:00.651599       1 config.go:97] "Starting endpoint slice config controller"
	I0717 19:59:00.651625       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 19:59:00.653297       1 config.go:315] "Starting node config controller"
	I0717 19:59:00.653344       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 19:59:00.751849       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0717 19:59:00.751916       1 shared_informer.go:318] Caches are synced for service config
	I0717 19:59:00.754332       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5] <==
	* I0717 19:58:55.114636       1 serving.go:348] Generated self-signed cert in-memory
	W0717 19:58:57.977031       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 19:58:57.977110       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 19:58:57.977140       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 19:58:57.977164       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 19:58:58.031371       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.3"
	I0717 19:58:58.034846       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 19:58:58.038624       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 19:58:58.038684       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 19:58:58.039661       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0717 19:58:58.039744       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 19:58:58.240122       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-07-17 19:58:06 UTC, ends at Mon 2023-07-17 20:18:16 UTC. --
	Jul 17 20:15:42 no-preload-408472 kubelet[1245]: E0717 20:15:42.181179    1245 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hnngh" podUID=dfff837e-dbba-4795-935d-9562d2744169
	Jul 17 20:15:51 no-preload-408472 kubelet[1245]: E0717 20:15:51.205956    1245 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 17 20:15:51 no-preload-408472 kubelet[1245]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 20:15:51 no-preload-408472 kubelet[1245]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 20:15:51 no-preload-408472 kubelet[1245]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 20:15:54 no-preload-408472 kubelet[1245]: E0717 20:15:54.181841    1245 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hnngh" podUID=dfff837e-dbba-4795-935d-9562d2744169
	Jul 17 20:16:08 no-preload-408472 kubelet[1245]: E0717 20:16:08.181862    1245 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hnngh" podUID=dfff837e-dbba-4795-935d-9562d2744169
	Jul 17 20:16:22 no-preload-408472 kubelet[1245]: E0717 20:16:22.185141    1245 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hnngh" podUID=dfff837e-dbba-4795-935d-9562d2744169
	Jul 17 20:16:34 no-preload-408472 kubelet[1245]: E0717 20:16:34.181980    1245 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hnngh" podUID=dfff837e-dbba-4795-935d-9562d2744169
	Jul 17 20:16:45 no-preload-408472 kubelet[1245]: E0717 20:16:45.181294    1245 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hnngh" podUID=dfff837e-dbba-4795-935d-9562d2744169
	Jul 17 20:16:51 no-preload-408472 kubelet[1245]: E0717 20:16:51.201664    1245 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 17 20:16:51 no-preload-408472 kubelet[1245]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 20:16:51 no-preload-408472 kubelet[1245]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 20:16:51 no-preload-408472 kubelet[1245]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 20:17:00 no-preload-408472 kubelet[1245]: E0717 20:17:00.180896    1245 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hnngh" podUID=dfff837e-dbba-4795-935d-9562d2744169
	Jul 17 20:17:14 no-preload-408472 kubelet[1245]: E0717 20:17:14.181475    1245 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hnngh" podUID=dfff837e-dbba-4795-935d-9562d2744169
	Jul 17 20:17:26 no-preload-408472 kubelet[1245]: E0717 20:17:26.181783    1245 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hnngh" podUID=dfff837e-dbba-4795-935d-9562d2744169
	Jul 17 20:17:39 no-preload-408472 kubelet[1245]: E0717 20:17:39.182792    1245 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hnngh" podUID=dfff837e-dbba-4795-935d-9562d2744169
	Jul 17 20:17:50 no-preload-408472 kubelet[1245]: E0717 20:17:50.183731    1245 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hnngh" podUID=dfff837e-dbba-4795-935d-9562d2744169
	Jul 17 20:17:51 no-preload-408472 kubelet[1245]: E0717 20:17:51.203091    1245 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 17 20:17:51 no-preload-408472 kubelet[1245]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 20:17:51 no-preload-408472 kubelet[1245]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 20:17:51 no-preload-408472 kubelet[1245]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 20:18:02 no-preload-408472 kubelet[1245]: E0717 20:18:02.182276    1245 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hnngh" podUID=dfff837e-dbba-4795-935d-9562d2744169
	Jul 17 20:18:16 no-preload-408472 kubelet[1245]: E0717 20:18:16.181205    1245 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hnngh" podUID=dfff837e-dbba-4795-935d-9562d2744169
	
	* 
	* ==> storage-provisioner [434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447] <==
	* I0717 19:59:31.729050       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 19:59:31.751948       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 19:59:31.752229       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 19:59:49.160556       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 19:59:49.161290       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-408472_25383115-a75d-491b-ab63-40bb6346fdc9!
	I0717 19:59:49.163828       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"59460376-84b8-4c43-8c5e-9241ae256687", APIVersion:"v1", ResourceVersion:"644", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-408472_25383115-a75d-491b-ab63-40bb6346fdc9 became leader
	I0717 19:59:49.263720       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-408472_25383115-a75d-491b-ab63-40bb6346fdc9!
	
	* 
	* ==> storage-provisioner [cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379] <==
	* I0717 19:59:00.490867       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0717 19:59:30.495953       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-408472 -n no-preload-408472
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-408472 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5c6b9c-hnngh
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-408472 describe pod metrics-server-74d5c6b9c-hnngh
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-408472 describe pod metrics-server-74d5c6b9c-hnngh: exit status 1 (87.554851ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5c6b9c-hnngh" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-408472 describe pod metrics-server-74d5c6b9c-hnngh: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (348.70s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (423.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0717 20:13:01.330789 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/functional-685960/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-711413 -n default-k8s-diff-port-711413
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-07-17 20:19:34.326042637 +0000 UTC m=+5775.634738193
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-711413 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-711413 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.453µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-711413 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-711413 -n default-k8s-diff-port-711413
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-711413 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-711413 logs -n 25: (1.336372661s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p newest-cni-891260             | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:51 UTC | 17 Jul 23 19:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-891260                                   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:51 UTC | 17 Jul 23 19:51 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-891260                  | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:51 UTC | 17 Jul 23 19:51 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-891260 --memory=2200 --alsologtostderr   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:51 UTC | 17 Jul 23 19:52 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-891260 sudo                              | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-891260                                   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-891260                                   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-891260                                   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	| delete  | -p newest-cni-891260                                   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	| delete  | -p                                                     | disable-driver-mounts-178387 | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	|         | disable-driver-mounts-178387                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-114855                                  | embed-certs-114855           | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-149000             | old-k8s-version-149000       | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-149000                              | old-k8s-version-149000       | jenkins | v1.30.1 | 17 Jul 23 19:53 UTC | 17 Jul 23 20:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-408472                  | no-preload-408472            | jenkins | v1.30.1 | 17 Jul 23 19:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-408472                                   | no-preload-408472            | jenkins | v1.30.1 | 17 Jul 23 19:53 UTC | 17 Jul 23 20:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-711413       | default-k8s-diff-port-711413 | jenkins | v1.30.1 | 17 Jul 23 19:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-711413 | jenkins | v1.30.1 | 17 Jul 23 19:54 UTC | 17 Jul 23 20:03 UTC |
	|         | default-k8s-diff-port-711413                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-114855            | embed-certs-114855           | jenkins | v1.30.1 | 17 Jul 23 19:54 UTC | 17 Jul 23 19:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-114855                                  | embed-certs-114855           | jenkins | v1.30.1 | 17 Jul 23 19:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-114855                 | embed-certs-114855           | jenkins | v1.30.1 | 17 Jul 23 19:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-114855                                  | embed-certs-114855           | jenkins | v1.30.1 | 17 Jul 23 19:57 UTC | 17 Jul 23 20:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-149000                              | old-k8s-version-149000       | jenkins | v1.30.1 | 17 Jul 23 20:17 UTC | 17 Jul 23 20:17 UTC |
	| start   | -p auto-395471 --memory=3072                           | auto-395471                  | jenkins | v1.30.1 | 17 Jul 23 20:17 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p no-preload-408472                                   | no-preload-408472            | jenkins | v1.30.1 | 17 Jul 23 20:18 UTC | 17 Jul 23 20:18 UTC |
	| start   | -p flannel-395471                                      | flannel-395471               | jenkins | v1.30.1 | 17 Jul 23 20:18 UTC |                     |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=flannel --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
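	For reference, the two final start rows in the audit table above, reassembled from their wrapped cells into single command lines (assuming the binary under test, MINIKUBE_BIN=out/minikube-linux-amd64, as reported in the log below):
	
	out/minikube-linux-amd64 start -p auto-395471 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 start -p flannel-395471 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 --container-runtime=crio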
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 20:18:18
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 20:18:18.814340 1108527 out.go:296] Setting OutFile to fd 1 ...
	I0717 20:18:18.814479 1108527 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 20:18:18.814491 1108527 out.go:309] Setting ErrFile to fd 2...
	I0717 20:18:18.814495 1108527 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 20:18:18.814732 1108527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1061725/.minikube/bin
	I0717 20:18:18.815434 1108527 out.go:303] Setting JSON to false
	I0717 20:18:18.816530 1108527 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":18050,"bootTime":1689607049,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 20:18:18.816606 1108527 start.go:138] virtualization: kvm guest
	I0717 20:18:18.821363 1108527 out.go:177] * [flannel-395471] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 20:18:18.823805 1108527 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 20:18:18.823882 1108527 notify.go:220] Checking for updates...
	I0717 20:18:18.825887 1108527 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 20:18:18.827858 1108527 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 20:18:18.829743 1108527 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1061725/.minikube
	I0717 20:18:18.831704 1108527 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 20:18:18.833537 1108527 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 20:18:18.835901 1108527 config.go:182] Loaded profile config "auto-395471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 20:18:18.836055 1108527 config.go:182] Loaded profile config "default-k8s-diff-port-711413": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 20:18:18.836619 1108527 config.go:182] Loaded profile config "embed-certs-114855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 20:18:18.837393 1108527 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 20:18:18.881470 1108527 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 20:18:18.883664 1108527 start.go:298] selected driver: kvm2
	I0717 20:18:18.883692 1108527 start.go:880] validating driver "kvm2" against <nil>
	I0717 20:18:18.883706 1108527 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 20:18:18.884458 1108527 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 20:18:18.884547 1108527 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16890-1061725/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 20:18:18.900802 1108527 install.go:137] /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2 version is 1.30.1
	I0717 20:18:18.900864 1108527 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 20:18:18.901144 1108527 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 20:18:18.901225 1108527 cni.go:84] Creating CNI manager for "flannel"
	I0717 20:18:18.901238 1108527 start_flags.go:314] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0717 20:18:18.901251 1108527 start_flags.go:319] config:
	{Name:flannel-395471 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:flannel-395471 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 20:18:18.901455 1108527 iso.go:125] acquiring lock: {Name:mk4c08fdde891ec9d41b144ca36ec57b2da32175 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 20:18:18.904176 1108527 out.go:177] * Starting control plane node flannel-395471 in cluster flannel-395471
	I0717 20:18:18.906105 1108527 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 20:18:18.906171 1108527 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0717 20:18:18.906187 1108527 cache.go:57] Caching tarball of preloaded images
	I0717 20:18:18.906288 1108527 preload.go:174] Found /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 20:18:18.906304 1108527 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 20:18:18.906465 1108527 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/flannel-395471/config.json ...
	I0717 20:18:18.906491 1108527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/flannel-395471/config.json: {Name:mk6990a83187f79e4aacfd2568bfb393af746c05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:18:18.906678 1108527 start.go:365] acquiring machines lock for flannel-395471: {Name:mk1fbc5d19c9a63796b02883911e39f1c406ff89 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 20:18:18.906717 1108527 start.go:369] acquired machines lock for "flannel-395471" in 20.234µs
	I0717 20:18:18.906743 1108527 start.go:93] Provisioning new machine with config: &{Name:flannel-395471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:flannel-395471 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 20:18:18.906848 1108527 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 20:18:16.170704 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:16.171313 1107949 main.go:141] libmachine: (auto-395471) Found IP for machine: 192.168.50.3
	I0717 20:18:16.171344 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has current primary IP address 192.168.50.3 and MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:16.171353 1107949 main.go:141] libmachine: (auto-395471) Reserving static IP address...
	I0717 20:18:16.171827 1107949 main.go:141] libmachine: (auto-395471) DBG | unable to find host DHCP lease matching {name: "auto-395471", mac: "52:54:00:b3:ba:8c", ip: "192.168.50.3"} in network mk-auto-395471
	I0717 20:18:16.267620 1107949 main.go:141] libmachine: (auto-395471) DBG | Getting to WaitForSSH function...
	I0717 20:18:16.267674 1107949 main.go:141] libmachine: (auto-395471) Reserved static IP address: 192.168.50.3
	I0717 20:18:16.267690 1107949 main.go:141] libmachine: (auto-395471) Waiting for SSH to be available...
	I0717 20:18:16.270928 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:16.271505 1107949 main.go:141] libmachine: (auto-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:ba:8c", ip: ""} in network mk-auto-395471: {Iface:virbr1 ExpiryTime:2023-07-17 21:18:10 +0000 UTC Type:0 Mac:52:54:00:b3:ba:8c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b3:ba:8c}
	I0717 20:18:16.271555 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined IP address 192.168.50.3 and MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:16.271702 1107949 main.go:141] libmachine: (auto-395471) DBG | Using SSH client type: external
	I0717 20:18:16.271733 1107949 main.go:141] libmachine: (auto-395471) DBG | Using SSH private key: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/auto-395471/id_rsa (-rw-------)
	I0717 20:18:16.271775 1107949 main.go:141] libmachine: (auto-395471) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/auto-395471/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 20:18:16.271787 1107949 main.go:141] libmachine: (auto-395471) DBG | About to run SSH command:
	I0717 20:18:16.271799 1107949 main.go:141] libmachine: (auto-395471) DBG | exit 0
	I0717 20:18:16.366054 1107949 main.go:141] libmachine: (auto-395471) DBG | SSH cmd err, output: <nil>: 
	I0717 20:18:16.366355 1107949 main.go:141] libmachine: (auto-395471) KVM machine creation complete!
	I0717 20:18:16.366704 1107949 main.go:141] libmachine: (auto-395471) Calling .GetConfigRaw
	I0717 20:18:16.367333 1107949 main.go:141] libmachine: (auto-395471) Calling .DriverName
	I0717 20:18:16.367561 1107949 main.go:141] libmachine: (auto-395471) Calling .DriverName
	I0717 20:18:16.367788 1107949 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 20:18:16.367834 1107949 main.go:141] libmachine: (auto-395471) Calling .GetState
	I0717 20:18:16.369533 1107949 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 20:18:16.369585 1107949 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 20:18:16.369595 1107949 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 20:18:16.369604 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHHostname
	I0717 20:18:16.372701 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:16.373271 1107949 main.go:141] libmachine: (auto-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:ba:8c", ip: ""} in network mk-auto-395471: {Iface:virbr1 ExpiryTime:2023-07-17 21:18:10 +0000 UTC Type:0 Mac:52:54:00:b3:ba:8c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:auto-395471 Clientid:01:52:54:00:b3:ba:8c}
	I0717 20:18:16.373302 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined IP address 192.168.50.3 and MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:16.373475 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHPort
	I0717 20:18:16.373721 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHKeyPath
	I0717 20:18:16.373913 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHKeyPath
	I0717 20:18:16.374094 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHUsername
	I0717 20:18:16.374272 1107949 main.go:141] libmachine: Using SSH client type: native
	I0717 20:18:16.374991 1107949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.3 22 <nil> <nil>}
	I0717 20:18:16.375018 1107949 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 20:18:16.493359 1107949 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 20:18:16.493389 1107949 main.go:141] libmachine: Detecting the provisioner...
	I0717 20:18:16.493398 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHHostname
	I0717 20:18:16.496769 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:16.497235 1107949 main.go:141] libmachine: (auto-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:ba:8c", ip: ""} in network mk-auto-395471: {Iface:virbr1 ExpiryTime:2023-07-17 21:18:10 +0000 UTC Type:0 Mac:52:54:00:b3:ba:8c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:auto-395471 Clientid:01:52:54:00:b3:ba:8c}
	I0717 20:18:16.497285 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined IP address 192.168.50.3 and MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:16.497599 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHPort
	I0717 20:18:16.497864 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHKeyPath
	I0717 20:18:16.498110 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHKeyPath
	I0717 20:18:16.498311 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHUsername
	I0717 20:18:16.498526 1107949 main.go:141] libmachine: Using SSH client type: native
	I0717 20:18:16.499160 1107949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.3 22 <nil> <nil>}
	I0717 20:18:16.499182 1107949 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 20:18:16.616101 1107949 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gf5d52c7-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0717 20:18:16.616278 1107949 main.go:141] libmachine: found compatible host: buildroot
	I0717 20:18:16.616309 1107949 main.go:141] libmachine: Provisioning with buildroot...
	I0717 20:18:16.616327 1107949 main.go:141] libmachine: (auto-395471) Calling .GetMachineName
	I0717 20:18:16.616655 1107949 buildroot.go:166] provisioning hostname "auto-395471"
	I0717 20:18:16.616689 1107949 main.go:141] libmachine: (auto-395471) Calling .GetMachineName
	I0717 20:18:16.616992 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHHostname
	I0717 20:18:16.620638 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:16.621107 1107949 main.go:141] libmachine: (auto-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:ba:8c", ip: ""} in network mk-auto-395471: {Iface:virbr1 ExpiryTime:2023-07-17 21:18:10 +0000 UTC Type:0 Mac:52:54:00:b3:ba:8c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:auto-395471 Clientid:01:52:54:00:b3:ba:8c}
	I0717 20:18:16.621152 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined IP address 192.168.50.3 and MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:16.621300 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHPort
	I0717 20:18:16.621541 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHKeyPath
	I0717 20:18:16.621736 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHKeyPath
	I0717 20:18:16.621867 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHUsername
	I0717 20:18:16.622085 1107949 main.go:141] libmachine: Using SSH client type: native
	I0717 20:18:16.622528 1107949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.3 22 <nil> <nil>}
	I0717 20:18:16.622552 1107949 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-395471 && echo "auto-395471" | sudo tee /etc/hostname
	I0717 20:18:16.756999 1107949 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-395471
	
	I0717 20:18:16.757040 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHHostname
	I0717 20:18:16.760132 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:16.760504 1107949 main.go:141] libmachine: (auto-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:ba:8c", ip: ""} in network mk-auto-395471: {Iface:virbr1 ExpiryTime:2023-07-17 21:18:10 +0000 UTC Type:0 Mac:52:54:00:b3:ba:8c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:auto-395471 Clientid:01:52:54:00:b3:ba:8c}
	I0717 20:18:16.760540 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined IP address 192.168.50.3 and MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:16.760708 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHPort
	I0717 20:18:16.760951 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHKeyPath
	I0717 20:18:16.761186 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHKeyPath
	I0717 20:18:16.761387 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHUsername
	I0717 20:18:16.761592 1107949 main.go:141] libmachine: Using SSH client type: native
	I0717 20:18:16.762212 1107949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.3 22 <nil> <nil>}
	I0717 20:18:16.762237 1107949 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-395471' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-395471/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-395471' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 20:18:16.892189 1107949 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 20:18:16.892249 1107949 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16890-1061725/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-1061725/.minikube}
	I0717 20:18:16.892285 1107949 buildroot.go:174] setting up certificates
	I0717 20:18:16.892299 1107949 provision.go:83] configureAuth start
	I0717 20:18:16.892311 1107949 main.go:141] libmachine: (auto-395471) Calling .GetMachineName
	I0717 20:18:16.892649 1107949 main.go:141] libmachine: (auto-395471) Calling .GetIP
	I0717 20:18:16.895889 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:16.896359 1107949 main.go:141] libmachine: (auto-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:ba:8c", ip: ""} in network mk-auto-395471: {Iface:virbr1 ExpiryTime:2023-07-17 21:18:10 +0000 UTC Type:0 Mac:52:54:00:b3:ba:8c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:auto-395471 Clientid:01:52:54:00:b3:ba:8c}
	I0717 20:18:16.896404 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined IP address 192.168.50.3 and MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:16.896738 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHHostname
	I0717 20:18:16.899745 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:16.900246 1107949 main.go:141] libmachine: (auto-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:ba:8c", ip: ""} in network mk-auto-395471: {Iface:virbr1 ExpiryTime:2023-07-17 21:18:10 +0000 UTC Type:0 Mac:52:54:00:b3:ba:8c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:auto-395471 Clientid:01:52:54:00:b3:ba:8c}
	I0717 20:18:16.900281 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined IP address 192.168.50.3 and MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:16.900507 1107949 provision.go:138] copyHostCerts
	I0717 20:18:16.900582 1107949 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem, removing ...
	I0717 20:18:16.900596 1107949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem
	I0717 20:18:16.900690 1107949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem (1123 bytes)
	I0717 20:18:16.900867 1107949 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem, removing ...
	I0717 20:18:16.900888 1107949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem
	I0717 20:18:16.900929 1107949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem (1675 bytes)
	I0717 20:18:16.901003 1107949 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem, removing ...
	I0717 20:18:16.901015 1107949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem
	I0717 20:18:16.901043 1107949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem (1082 bytes)
	I0717 20:18:16.901115 1107949 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem org=jenkins.auto-395471 san=[192.168.50.3 192.168.50.3 localhost 127.0.0.1 minikube auto-395471]
	I0717 20:18:17.164293 1107949 provision.go:172] copyRemoteCerts
	I0717 20:18:17.164378 1107949 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 20:18:17.164411 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHHostname
	I0717 20:18:17.167452 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:17.167914 1107949 main.go:141] libmachine: (auto-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:ba:8c", ip: ""} in network mk-auto-395471: {Iface:virbr1 ExpiryTime:2023-07-17 21:18:10 +0000 UTC Type:0 Mac:52:54:00:b3:ba:8c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:auto-395471 Clientid:01:52:54:00:b3:ba:8c}
	I0717 20:18:17.167947 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined IP address 192.168.50.3 and MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:17.168126 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHPort
	I0717 20:18:17.168357 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHKeyPath
	I0717 20:18:17.168534 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHUsername
	I0717 20:18:17.168661 1107949 sshutil.go:53] new ssh client: &{IP:192.168.50.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/auto-395471/id_rsa Username:docker}
	I0717 20:18:17.268849 1107949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 20:18:17.295700 1107949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 20:18:17.324964 1107949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0717 20:18:17.351323 1107949 provision.go:86] duration metric: configureAuth took 459.005707ms
	I0717 20:18:17.351359 1107949 buildroot.go:189] setting minikube options for container-runtime
	I0717 20:18:17.351594 1107949 config.go:182] Loaded profile config "auto-395471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 20:18:17.351693 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHHostname
	I0717 20:18:17.355157 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:17.355656 1107949 main.go:141] libmachine: (auto-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:ba:8c", ip: ""} in network mk-auto-395471: {Iface:virbr1 ExpiryTime:2023-07-17 21:18:10 +0000 UTC Type:0 Mac:52:54:00:b3:ba:8c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:auto-395471 Clientid:01:52:54:00:b3:ba:8c}
	I0717 20:18:17.355696 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined IP address 192.168.50.3 and MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:17.355876 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHPort
	I0717 20:18:17.356174 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHKeyPath
	I0717 20:18:17.356368 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHKeyPath
	I0717 20:18:17.356566 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHUsername
	I0717 20:18:17.356823 1107949 main.go:141] libmachine: Using SSH client type: native
	I0717 20:18:17.357481 1107949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.3 22 <nil> <nil>}
	I0717 20:18:17.357513 1107949 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 20:18:17.720717 1107949 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 20:18:17.720761 1107949 main.go:141] libmachine: Checking connection to Docker...
	I0717 20:18:17.720771 1107949 main.go:141] libmachine: (auto-395471) Calling .GetURL
	I0717 20:18:17.724210 1107949 main.go:141] libmachine: (auto-395471) DBG | Using libvirt version 6000000
	I0717 20:18:17.729894 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:17.886548 1107949 main.go:141] libmachine: (auto-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:ba:8c", ip: ""} in network mk-auto-395471: {Iface:virbr1 ExpiryTime:2023-07-17 21:18:10 +0000 UTC Type:0 Mac:52:54:00:b3:ba:8c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:auto-395471 Clientid:01:52:54:00:b3:ba:8c}
	I0717 20:18:17.886595 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined IP address 192.168.50.3 and MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:17.886766 1107949 main.go:141] libmachine: Docker is up and running!
	I0717 20:18:17.886788 1107949 main.go:141] libmachine: Reticulating splines...
	I0717 20:18:17.886798 1107949 client.go:171] LocalClient.Create took 23.479495843s
	I0717 20:18:17.886827 1107949 start.go:167] duration metric: libmachine.API.Create for "auto-395471" took 23.479581859s
	I0717 20:18:17.886839 1107949 start.go:300] post-start starting for "auto-395471" (driver="kvm2")
	I0717 20:18:17.886885 1107949 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 20:18:17.886930 1107949 main.go:141] libmachine: (auto-395471) Calling .DriverName
	I0717 20:18:17.887266 1107949 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 20:18:17.887305 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHHostname
	I0717 20:18:18.446528 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:18.446957 1107949 main.go:141] libmachine: (auto-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:ba:8c", ip: ""} in network mk-auto-395471: {Iface:virbr1 ExpiryTime:2023-07-17 21:18:10 +0000 UTC Type:0 Mac:52:54:00:b3:ba:8c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:auto-395471 Clientid:01:52:54:00:b3:ba:8c}
	I0717 20:18:18.446995 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined IP address 192.168.50.3 and MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:18.447171 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHPort
	I0717 20:18:18.447427 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHKeyPath
	I0717 20:18:18.447645 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHUsername
	I0717 20:18:18.447806 1107949 sshutil.go:53] new ssh client: &{IP:192.168.50.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/auto-395471/id_rsa Username:docker}
	I0717 20:18:18.536008 1107949 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 20:18:18.541004 1107949 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 20:18:18.541037 1107949 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/addons for local assets ...
	I0717 20:18:18.541118 1107949 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/files for local assets ...
	I0717 20:18:18.541212 1107949 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> 10689542.pem in /etc/ssl/certs
	I0717 20:18:18.541333 1107949 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 20:18:18.551526 1107949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 20:18:18.581842 1107949 start.go:303] post-start completed in 694.983804ms
	I0717 20:18:18.581899 1107949 main.go:141] libmachine: (auto-395471) Calling .GetConfigRaw
	I0717 20:18:18.582701 1107949 main.go:141] libmachine: (auto-395471) Calling .GetIP
	I0717 20:18:18.585761 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:18.586245 1107949 main.go:141] libmachine: (auto-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:ba:8c", ip: ""} in network mk-auto-395471: {Iface:virbr1 ExpiryTime:2023-07-17 21:18:10 +0000 UTC Type:0 Mac:52:54:00:b3:ba:8c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:auto-395471 Clientid:01:52:54:00:b3:ba:8c}
	I0717 20:18:18.586288 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined IP address 192.168.50.3 and MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:18.586583 1107949 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/auto-395471/config.json ...
	I0717 20:18:18.586799 1107949 start.go:128] duration metric: createHost completed in 24.201492098s
	I0717 20:18:18.586834 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHHostname
	I0717 20:18:18.589877 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:18.590379 1107949 main.go:141] libmachine: (auto-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:ba:8c", ip: ""} in network mk-auto-395471: {Iface:virbr1 ExpiryTime:2023-07-17 21:18:10 +0000 UTC Type:0 Mac:52:54:00:b3:ba:8c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:auto-395471 Clientid:01:52:54:00:b3:ba:8c}
	I0717 20:18:18.590428 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined IP address 192.168.50.3 and MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:18.590620 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHPort
	I0717 20:18:18.590870 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHKeyPath
	I0717 20:18:18.591092 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHKeyPath
	I0717 20:18:18.591268 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHUsername
	I0717 20:18:18.591481 1107949 main.go:141] libmachine: Using SSH client type: native
	I0717 20:18:18.592129 1107949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.3 22 <nil> <nil>}
	I0717 20:18:18.592147 1107949 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 20:18:18.707160 1107949 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689625098.683024077
	
	I0717 20:18:18.707190 1107949 fix.go:206] guest clock: 1689625098.683024077
	I0717 20:18:18.707201 1107949 fix.go:219] Guest: 2023-07-17 20:18:18.683024077 +0000 UTC Remote: 2023-07-17 20:18:18.586811309 +0000 UTC m=+24.327717178 (delta=96.212768ms)
	I0717 20:18:18.707263 1107949 fix.go:190] guest clock delta is within tolerance: 96.212768ms
	I0717 20:18:18.707270 1107949 start.go:83] releasing machines lock for "auto-395471", held for 24.322054085s
	I0717 20:18:18.707300 1107949 main.go:141] libmachine: (auto-395471) Calling .DriverName
	I0717 20:18:18.707627 1107949 main.go:141] libmachine: (auto-395471) Calling .GetIP
	I0717 20:18:18.710727 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:18.711069 1107949 main.go:141] libmachine: (auto-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:ba:8c", ip: ""} in network mk-auto-395471: {Iface:virbr1 ExpiryTime:2023-07-17 21:18:10 +0000 UTC Type:0 Mac:52:54:00:b3:ba:8c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:auto-395471 Clientid:01:52:54:00:b3:ba:8c}
	I0717 20:18:18.711099 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined IP address 192.168.50.3 and MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:18.711349 1107949 main.go:141] libmachine: (auto-395471) Calling .DriverName
	I0717 20:18:18.711906 1107949 main.go:141] libmachine: (auto-395471) Calling .DriverName
	I0717 20:18:18.712109 1107949 main.go:141] libmachine: (auto-395471) Calling .DriverName
	I0717 20:18:18.712191 1107949 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 20:18:18.712234 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHHostname
	I0717 20:18:18.712362 1107949 ssh_runner.go:195] Run: cat /version.json
	I0717 20:18:18.712390 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHHostname
	I0717 20:18:18.715556 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:18.715599 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:18.715987 1107949 main.go:141] libmachine: (auto-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:ba:8c", ip: ""} in network mk-auto-395471: {Iface:virbr1 ExpiryTime:2023-07-17 21:18:10 +0000 UTC Type:0 Mac:52:54:00:b3:ba:8c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:auto-395471 Clientid:01:52:54:00:b3:ba:8c}
	I0717 20:18:18.716025 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined IP address 192.168.50.3 and MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:18.716049 1107949 main.go:141] libmachine: (auto-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:ba:8c", ip: ""} in network mk-auto-395471: {Iface:virbr1 ExpiryTime:2023-07-17 21:18:10 +0000 UTC Type:0 Mac:52:54:00:b3:ba:8c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:auto-395471 Clientid:01:52:54:00:b3:ba:8c}
	I0717 20:18:18.716066 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined IP address 192.168.50.3 and MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:18.716206 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHPort
	I0717 20:18:18.716327 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHPort
	I0717 20:18:18.716424 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHKeyPath
	I0717 20:18:18.716539 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHKeyPath
	I0717 20:18:18.716628 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHUsername
	I0717 20:18:18.716695 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHUsername
	I0717 20:18:18.716787 1107949 sshutil.go:53] new ssh client: &{IP:192.168.50.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/auto-395471/id_rsa Username:docker}
	I0717 20:18:18.716834 1107949 sshutil.go:53] new ssh client: &{IP:192.168.50.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/auto-395471/id_rsa Username:docker}
	W0717 20:18:18.863147 1107949 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 20:18:18.863273 1107949 ssh_runner.go:195] Run: systemctl --version
	I0717 20:18:18.870415 1107949 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 20:18:19.048529 1107949 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 20:18:19.055580 1107949 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 20:18:19.055661 1107949 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 20:18:19.078605 1107949 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 20:18:19.078637 1107949 start.go:469] detecting cgroup driver to use...
	I0717 20:18:19.078792 1107949 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 20:18:19.095245 1107949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 20:18:19.110562 1107949 docker.go:196] disabling cri-docker service (if available) ...
	I0717 20:18:19.110628 1107949 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 20:18:19.127305 1107949 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 20:18:19.143520 1107949 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 20:18:19.274519 1107949 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 20:18:19.419251 1107949 docker.go:212] disabling docker service ...
	I0717 20:18:19.419360 1107949 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 20:18:19.434458 1107949 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 20:18:19.447966 1107949 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 20:18:19.580590 1107949 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 20:18:19.697952 1107949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 20:18:19.716731 1107949 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 20:18:19.736321 1107949 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 20:18:19.736396 1107949 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 20:18:19.748947 1107949 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 20:18:19.749019 1107949 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 20:18:19.759780 1107949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 20:18:19.772624 1107949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 20:18:19.785373 1107949 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 20:18:19.797328 1107949 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 20:18:19.808397 1107949 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 20:18:19.808561 1107949 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 20:18:19.824634 1107949 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 20:18:19.835690 1107949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 20:18:19.981639 1107949 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 20:18:20.210371 1107949 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 20:18:20.210461 1107949 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 20:18:20.216730 1107949 start.go:537] Will wait 60s for crictl version
	I0717 20:18:20.216814 1107949 ssh_runner.go:195] Run: which crictl
	I0717 20:18:20.221119 1107949 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 20:18:20.261625 1107949 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 20:18:20.261754 1107949 ssh_runner.go:195] Run: crio --version
	I0717 20:18:20.320838 1107949 ssh_runner.go:195] Run: crio --version
	I0717 20:18:20.375227 1107949 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
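	The CRI-O preparation for auto-395471 logged above condenses, roughly, to the following shell sequence run over SSH inside the guest (a sketch assembled from the sed/modprobe/systemctl commands shown in the log, not an exact transcript):
	
	# point cri-o at the expected pause image and the cgroupfs cgroup manager
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# bridge-nf-call-iptables was not present, so load br_netfilter and enable IPv4 forwarding
	sudo modprobe br_netfilter
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo systemctl daemon-reload && sudo systemctl restart crio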
	I0717 20:18:18.909339 1108527 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 20:18:18.909605 1108527 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:18:18.909663 1108527 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:18:18.929474 1108527 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36053
	I0717 20:18:18.930053 1108527 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:18:18.930717 1108527 main.go:141] libmachine: Using API Version  1
	I0717 20:18:18.930746 1108527 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:18:18.931145 1108527 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:18:18.931378 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetMachineName
	I0717 20:18:18.931519 1108527 main.go:141] libmachine: (flannel-395471) Calling .DriverName
	I0717 20:18:18.931745 1108527 start.go:159] libmachine.API.Create for "flannel-395471" (driver="kvm2")
	I0717 20:18:18.931774 1108527 client.go:168] LocalClient.Create starting
	I0717 20:18:18.931834 1108527 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem
	I0717 20:18:18.931875 1108527 main.go:141] libmachine: Decoding PEM data...
	I0717 20:18:18.931895 1108527 main.go:141] libmachine: Parsing certificate...
	I0717 20:18:18.931973 1108527 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem
	I0717 20:18:18.931996 1108527 main.go:141] libmachine: Decoding PEM data...
	I0717 20:18:18.932006 1108527 main.go:141] libmachine: Parsing certificate...
	I0717 20:18:18.932032 1108527 main.go:141] libmachine: Running pre-create checks...
	I0717 20:18:18.932048 1108527 main.go:141] libmachine: (flannel-395471) Calling .PreCreateCheck
	I0717 20:18:18.932447 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetConfigRaw
	I0717 20:18:18.932934 1108527 main.go:141] libmachine: Creating machine...
	I0717 20:18:18.932957 1108527 main.go:141] libmachine: (flannel-395471) Calling .Create
	I0717 20:18:18.933119 1108527 main.go:141] libmachine: (flannel-395471) Creating KVM machine...
	I0717 20:18:18.934875 1108527 main.go:141] libmachine: (flannel-395471) DBG | found existing default KVM network
	I0717 20:18:18.936572 1108527 main.go:141] libmachine: (flannel-395471) DBG | I0717 20:18:18.936354 1108550 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:fc:80:4b} reservation:<nil>}
	I0717 20:18:18.937926 1108527 main.go:141] libmachine: (flannel-395471) DBG | I0717 20:18:18.937831 1108550 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:63:21:57} reservation:<nil>}
	I0717 20:18:18.939188 1108527 main.go:141] libmachine: (flannel-395471) DBG | I0717 20:18:18.939050 1108550 network.go:209] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002e1110}
	I0717 20:18:18.945467 1108527 main.go:141] libmachine: (flannel-395471) DBG | trying to create private KVM network mk-flannel-395471 192.168.61.0/24...
	I0717 20:18:19.033883 1108527 main.go:141] libmachine: (flannel-395471) DBG | private KVM network mk-flannel-395471 192.168.61.0/24 created
	I0717 20:18:19.033926 1108527 main.go:141] libmachine: (flannel-395471) Setting up store path in /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/flannel-395471 ...
	I0717 20:18:19.033954 1108527 main.go:141] libmachine: (flannel-395471) DBG | I0717 20:18:19.033884 1108550 common.go:116] Making disk image using store path: /home/jenkins/minikube-integration/16890-1061725/.minikube
	I0717 20:18:19.033972 1108527 main.go:141] libmachine: (flannel-395471) Building disk image from file:///home/jenkins/minikube-integration/16890-1061725/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso
	I0717 20:18:19.034009 1108527 main.go:141] libmachine: (flannel-395471) Downloading /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/16890-1061725/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso...
	I0717 20:18:19.280419 1108527 main.go:141] libmachine: (flannel-395471) DBG | I0717 20:18:19.280247 1108550 common.go:123] Creating ssh key: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/flannel-395471/id_rsa...
	I0717 20:18:19.390547 1108527 main.go:141] libmachine: (flannel-395471) DBG | I0717 20:18:19.390379 1108550 common.go:129] Creating raw disk image: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/flannel-395471/flannel-395471.rawdisk...
	I0717 20:18:19.390584 1108527 main.go:141] libmachine: (flannel-395471) DBG | Writing magic tar header
	I0717 20:18:19.390602 1108527 main.go:141] libmachine: (flannel-395471) DBG | Writing SSH key tar header
	I0717 20:18:19.390618 1108527 main.go:141] libmachine: (flannel-395471) DBG | I0717 20:18:19.390505 1108550 common.go:143] Fixing permissions on /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/flannel-395471 ...
	I0717 20:18:19.390636 1108527 main.go:141] libmachine: (flannel-395471) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/flannel-395471
	I0717 20:18:19.390731 1108527 main.go:141] libmachine: (flannel-395471) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines
	I0717 20:18:19.390766 1108527 main.go:141] libmachine: (flannel-395471) Setting executable bit set on /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/flannel-395471 (perms=drwx------)
	I0717 20:18:19.390778 1108527 main.go:141] libmachine: (flannel-395471) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16890-1061725/.minikube
	I0717 20:18:19.390795 1108527 main.go:141] libmachine: (flannel-395471) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16890-1061725
	I0717 20:18:19.390809 1108527 main.go:141] libmachine: (flannel-395471) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 20:18:19.390823 1108527 main.go:141] libmachine: (flannel-395471) DBG | Checking permissions on dir: /home/jenkins
	I0717 20:18:19.390833 1108527 main.go:141] libmachine: (flannel-395471) DBG | Checking permissions on dir: /home
	I0717 20:18:19.390848 1108527 main.go:141] libmachine: (flannel-395471) Setting executable bit set on /home/jenkins/minikube-integration/16890-1061725/.minikube/machines (perms=drwxr-xr-x)
	I0717 20:18:19.390864 1108527 main.go:141] libmachine: (flannel-395471) DBG | Skipping /home - not owner
	I0717 20:18:19.390908 1108527 main.go:141] libmachine: (flannel-395471) Setting executable bit set on /home/jenkins/minikube-integration/16890-1061725/.minikube (perms=drwxr-xr-x)
	I0717 20:18:19.390930 1108527 main.go:141] libmachine: (flannel-395471) Setting executable bit set on /home/jenkins/minikube-integration/16890-1061725 (perms=drwxrwxr-x)
	I0717 20:18:19.390953 1108527 main.go:141] libmachine: (flannel-395471) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 20:18:19.390968 1108527 main.go:141] libmachine: (flannel-395471) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 20:18:19.390982 1108527 main.go:141] libmachine: (flannel-395471) Creating domain...
	I0717 20:18:19.392145 1108527 main.go:141] libmachine: (flannel-395471) define libvirt domain using xml: 
	I0717 20:18:19.392184 1108527 main.go:141] libmachine: (flannel-395471) <domain type='kvm'>
	I0717 20:18:19.392195 1108527 main.go:141] libmachine: (flannel-395471)   <name>flannel-395471</name>
	I0717 20:18:19.392243 1108527 main.go:141] libmachine: (flannel-395471)   <memory unit='MiB'>3072</memory>
	I0717 20:18:19.392259 1108527 main.go:141] libmachine: (flannel-395471)   <vcpu>2</vcpu>
	I0717 20:18:19.392267 1108527 main.go:141] libmachine: (flannel-395471)   <features>
	I0717 20:18:19.392278 1108527 main.go:141] libmachine: (flannel-395471)     <acpi/>
	I0717 20:18:19.392287 1108527 main.go:141] libmachine: (flannel-395471)     <apic/>
	I0717 20:18:19.392295 1108527 main.go:141] libmachine: (flannel-395471)     <pae/>
	I0717 20:18:19.392307 1108527 main.go:141] libmachine: (flannel-395471)     
	I0717 20:18:19.392317 1108527 main.go:141] libmachine: (flannel-395471)   </features>
	I0717 20:18:19.392331 1108527 main.go:141] libmachine: (flannel-395471)   <cpu mode='host-passthrough'>
	I0717 20:18:19.392343 1108527 main.go:141] libmachine: (flannel-395471)   
	I0717 20:18:19.392358 1108527 main.go:141] libmachine: (flannel-395471)   </cpu>
	I0717 20:18:19.392370 1108527 main.go:141] libmachine: (flannel-395471)   <os>
	I0717 20:18:19.392387 1108527 main.go:141] libmachine: (flannel-395471)     <type>hvm</type>
	I0717 20:18:19.392401 1108527 main.go:141] libmachine: (flannel-395471)     <boot dev='cdrom'/>
	I0717 20:18:19.392413 1108527 main.go:141] libmachine: (flannel-395471)     <boot dev='hd'/>
	I0717 20:18:19.392424 1108527 main.go:141] libmachine: (flannel-395471)     <bootmenu enable='no'/>
	I0717 20:18:19.392436 1108527 main.go:141] libmachine: (flannel-395471)   </os>
	I0717 20:18:19.392447 1108527 main.go:141] libmachine: (flannel-395471)   <devices>
	I0717 20:18:19.392463 1108527 main.go:141] libmachine: (flannel-395471)     <disk type='file' device='cdrom'>
	I0717 20:18:19.392482 1108527 main.go:141] libmachine: (flannel-395471)       <source file='/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/flannel-395471/boot2docker.iso'/>
	I0717 20:18:19.392496 1108527 main.go:141] libmachine: (flannel-395471)       <target dev='hdc' bus='scsi'/>
	I0717 20:18:19.392509 1108527 main.go:141] libmachine: (flannel-395471)       <readonly/>
	I0717 20:18:19.392520 1108527 main.go:141] libmachine: (flannel-395471)     </disk>
	I0717 20:18:19.392547 1108527 main.go:141] libmachine: (flannel-395471)     <disk type='file' device='disk'>
	I0717 20:18:19.392574 1108527 main.go:141] libmachine: (flannel-395471)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 20:18:19.392592 1108527 main.go:141] libmachine: (flannel-395471)       <source file='/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/flannel-395471/flannel-395471.rawdisk'/>
	I0717 20:18:19.392609 1108527 main.go:141] libmachine: (flannel-395471)       <target dev='hda' bus='virtio'/>
	I0717 20:18:19.392623 1108527 main.go:141] libmachine: (flannel-395471)     </disk>
	I0717 20:18:19.392633 1108527 main.go:141] libmachine: (flannel-395471)     <interface type='network'>
	I0717 20:18:19.392647 1108527 main.go:141] libmachine: (flannel-395471)       <source network='mk-flannel-395471'/>
	I0717 20:18:19.392658 1108527 main.go:141] libmachine: (flannel-395471)       <model type='virtio'/>
	I0717 20:18:19.392671 1108527 main.go:141] libmachine: (flannel-395471)     </interface>
	I0717 20:18:19.392684 1108527 main.go:141] libmachine: (flannel-395471)     <interface type='network'>
	I0717 20:18:19.392702 1108527 main.go:141] libmachine: (flannel-395471)       <source network='default'/>
	I0717 20:18:19.392716 1108527 main.go:141] libmachine: (flannel-395471)       <model type='virtio'/>
	I0717 20:18:19.392730 1108527 main.go:141] libmachine: (flannel-395471)     </interface>
	I0717 20:18:19.392754 1108527 main.go:141] libmachine: (flannel-395471)     <serial type='pty'>
	I0717 20:18:19.392773 1108527 main.go:141] libmachine: (flannel-395471)       <target port='0'/>
	I0717 20:18:19.392792 1108527 main.go:141] libmachine: (flannel-395471)     </serial>
	I0717 20:18:19.392804 1108527 main.go:141] libmachine: (flannel-395471)     <console type='pty'>
	I0717 20:18:19.392818 1108527 main.go:141] libmachine: (flannel-395471)       <target type='serial' port='0'/>
	I0717 20:18:19.392830 1108527 main.go:141] libmachine: (flannel-395471)     </console>
	I0717 20:18:19.392841 1108527 main.go:141] libmachine: (flannel-395471)     <rng model='virtio'>
	I0717 20:18:19.392858 1108527 main.go:141] libmachine: (flannel-395471)       <backend model='random'>/dev/random</backend>
	I0717 20:18:19.392872 1108527 main.go:141] libmachine: (flannel-395471)     </rng>
	I0717 20:18:19.392883 1108527 main.go:141] libmachine: (flannel-395471)     
	I0717 20:18:19.392895 1108527 main.go:141] libmachine: (flannel-395471)     
	I0717 20:18:19.392906 1108527 main.go:141] libmachine: (flannel-395471)   </devices>
	I0717 20:18:19.392920 1108527 main.go:141] libmachine: (flannel-395471) </domain>
	I0717 20:18:19.392936 1108527 main.go:141] libmachine: (flannel-395471) 
	I0717 20:18:19.398376 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined MAC address 52:54:00:8b:b5:f5 in network default
	I0717 20:18:19.399251 1108527 main.go:141] libmachine: (flannel-395471) Ensuring networks are active...
	I0717 20:18:19.399280 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:19.400184 1108527 main.go:141] libmachine: (flannel-395471) Ensuring network default is active
	I0717 20:18:19.400591 1108527 main.go:141] libmachine: (flannel-395471) Ensuring network mk-flannel-395471 is active
	I0717 20:18:19.401260 1108527 main.go:141] libmachine: (flannel-395471) Getting domain xml...
	I0717 20:18:19.402121 1108527 main.go:141] libmachine: (flannel-395471) Creating domain...
	I0717 20:18:20.884583 1108527 main.go:141] libmachine: (flannel-395471) Waiting to get IP...
	I0717 20:18:20.885742 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:20.886336 1108527 main.go:141] libmachine: (flannel-395471) DBG | unable to find current IP address of domain flannel-395471 in network mk-flannel-395471
	I0717 20:18:20.886365 1108527 main.go:141] libmachine: (flannel-395471) DBG | I0717 20:18:20.886333 1108550 retry.go:31] will retry after 279.340574ms: waiting for machine to come up
	I0717 20:18:21.168445 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:21.169111 1108527 main.go:141] libmachine: (flannel-395471) DBG | unable to find current IP address of domain flannel-395471 in network mk-flannel-395471
	I0717 20:18:21.169142 1108527 main.go:141] libmachine: (flannel-395471) DBG | I0717 20:18:21.169056 1108550 retry.go:31] will retry after 262.634705ms: waiting for machine to come up
	I0717 20:18:21.433837 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:21.434528 1108527 main.go:141] libmachine: (flannel-395471) DBG | unable to find current IP address of domain flannel-395471 in network mk-flannel-395471
	I0717 20:18:21.434555 1108527 main.go:141] libmachine: (flannel-395471) DBG | I0717 20:18:21.434422 1108550 retry.go:31] will retry after 300.608225ms: waiting for machine to come up
	I0717 20:18:21.737158 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:21.737754 1108527 main.go:141] libmachine: (flannel-395471) DBG | unable to find current IP address of domain flannel-395471 in network mk-flannel-395471
	I0717 20:18:21.737790 1108527 main.go:141] libmachine: (flannel-395471) DBG | I0717 20:18:21.737699 1108550 retry.go:31] will retry after 530.169974ms: waiting for machine to come up
	I0717 20:18:22.270378 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:22.271386 1108527 main.go:141] libmachine: (flannel-395471) DBG | unable to find current IP address of domain flannel-395471 in network mk-flannel-395471
	I0717 20:18:22.271424 1108527 main.go:141] libmachine: (flannel-395471) DBG | I0717 20:18:22.271272 1108550 retry.go:31] will retry after 548.289803ms: waiting for machine to come up
	I0717 20:18:22.821015 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:22.821544 1108527 main.go:141] libmachine: (flannel-395471) DBG | unable to find current IP address of domain flannel-395471 in network mk-flannel-395471
	I0717 20:18:22.821590 1108527 main.go:141] libmachine: (flannel-395471) DBG | I0717 20:18:22.821479 1108550 retry.go:31] will retry after 854.595236ms: waiting for machine to come up
	I0717 20:18:23.677605 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:23.678222 1108527 main.go:141] libmachine: (flannel-395471) DBG | unable to find current IP address of domain flannel-395471 in network mk-flannel-395471
	I0717 20:18:23.678249 1108527 main.go:141] libmachine: (flannel-395471) DBG | I0717 20:18:23.678150 1108550 retry.go:31] will retry after 929.493991ms: waiting for machine to come up
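The repeated retry.go:31 lines above are libmachine polling the libvirt DHCP leases for the new domain's IP address, sleeping a little longer between each attempt. A minimal Go sketch of that wait-with-backoff pattern (lookupIP, the delays, and the growth factor are illustrative assumptions, not minikube's actual retry API):

package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP stands in for querying the libvirt DHCP leases for the domain's
// MAC address; here it fails a few times before returning an address, the
// way a freshly created VM does while it boots.
var attempts int

func lookupIP(mac string) (string, error) {
	attempts++
	if attempts < 4 {
		return "", errors.New("unable to find current IP address")
	}
	return "192.168.61.151", nil
}

// waitForIP retries lookupIP with a growing delay, as the log above does,
// until an IP appears or the overall deadline expires.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		ip, err := lookupIP(mac)
		if err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // back off gradually between polls
	}
	return "", fmt.Errorf("timed out waiting for IP of %s", mac)
}

func main() {
	ip, err := waitForIP("52:54:00:2d:0a:fd", time.Minute)
	fmt.Println(ip, err)
}

The real loop only gives up when the machine-start timeout expires, which is why the intervals above keep growing from a few hundred milliseconds to several seconds.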
	I0717 20:18:20.377054 1107949 main.go:141] libmachine: (auto-395471) Calling .GetIP
	I0717 20:18:20.380795 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:20.381265 1107949 main.go:141] libmachine: (auto-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:ba:8c", ip: ""} in network mk-auto-395471: {Iface:virbr1 ExpiryTime:2023-07-17 21:18:10 +0000 UTC Type:0 Mac:52:54:00:b3:ba:8c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:auto-395471 Clientid:01:52:54:00:b3:ba:8c}
	I0717 20:18:20.381306 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined IP address 192.168.50.3 and MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:20.381536 1107949 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0717 20:18:20.386526 1107949 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 20:18:20.400290 1107949 localpath.go:92] copying /home/jenkins/minikube-integration/16890-1061725/.minikube/client.crt -> /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/auto-395471/client.crt
	I0717 20:18:20.400468 1107949 localpath.go:117] copying /home/jenkins/minikube-integration/16890-1061725/.minikube/client.key -> /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/auto-395471/client.key
	I0717 20:18:20.400661 1107949 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 20:18:20.400717 1107949 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 20:18:20.435231 1107949 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0717 20:18:20.435314 1107949 ssh_runner.go:195] Run: which lz4
	I0717 20:18:20.439846 1107949 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 20:18:20.444617 1107949 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 20:18:20.444655 1107949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0717 20:18:22.344701 1107949 crio.go:444] Took 1.904887 seconds to copy over tarball
	I0717 20:18:22.344785 1107949 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 20:18:24.609118 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:24.609652 1108527 main.go:141] libmachine: (flannel-395471) DBG | unable to find current IP address of domain flannel-395471 in network mk-flannel-395471
	I0717 20:18:24.609689 1108527 main.go:141] libmachine: (flannel-395471) DBG | I0717 20:18:24.609606 1108550 retry.go:31] will retry after 1.401869163s: waiting for machine to come up
	I0717 20:18:26.013373 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:26.013943 1108527 main.go:141] libmachine: (flannel-395471) DBG | unable to find current IP address of domain flannel-395471 in network mk-flannel-395471
	I0717 20:18:26.013978 1108527 main.go:141] libmachine: (flannel-395471) DBG | I0717 20:18:26.013874 1108550 retry.go:31] will retry after 1.168123026s: waiting for machine to come up
	I0717 20:18:27.183706 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:27.184229 1108527 main.go:141] libmachine: (flannel-395471) DBG | unable to find current IP address of domain flannel-395471 in network mk-flannel-395471
	I0717 20:18:27.184255 1108527 main.go:141] libmachine: (flannel-395471) DBG | I0717 20:18:27.184180 1108550 retry.go:31] will retry after 1.873580668s: waiting for machine to come up
	I0717 20:18:25.705205 1107949 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.360383405s)
	I0717 20:18:25.705241 1107949 crio.go:451] Took 3.360505 seconds to extract the tarball
	I0717 20:18:25.705252 1107949 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 20:18:25.750853 1107949 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 20:18:25.820701 1107949 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 20:18:25.820725 1107949 cache_images.go:84] Images are preloaded, skipping loading
	I0717 20:18:25.820802 1107949 ssh_runner.go:195] Run: crio config
	I0717 20:18:25.886695 1107949 cni.go:84] Creating CNI manager for ""
	I0717 20:18:25.886748 1107949 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 20:18:25.886772 1107949 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 20:18:25.886824 1107949 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.3 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-395471 NodeName:auto-395471 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 20:18:25.887005 1107949 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-395471"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 20:18:25.887088 1107949 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=auto-395471 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:auto-395471 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 20:18:25.887147 1107949 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 20:18:25.899462 1107949 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 20:18:25.899573 1107949 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 20:18:25.912126 1107949 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I0717 20:18:25.932913 1107949 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 20:18:25.954468 1107949 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2092 bytes)
	I0717 20:18:25.976252 1107949 ssh_runner.go:195] Run: grep 192.168.50.3	control-plane.minikube.internal$ /etc/hosts
	I0717 20:18:25.982245 1107949 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 20:18:25.997870 1107949 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/auto-395471 for IP: 192.168.50.3
	I0717 20:18:25.997910 1107949 certs.go:190] acquiring lock for shared ca certs: {Name:mkdfbc203d7bd874aff2f1c4e5c3352251804e32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:18:25.998121 1107949 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key
	I0717 20:18:25.998186 1107949 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key
	I0717 20:18:25.998416 1107949 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/auto-395471/client.key
	I0717 20:18:25.998450 1107949 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/auto-395471/apiserver.key.56075cf4
	I0717 20:18:25.998464 1107949 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/auto-395471/apiserver.crt.56075cf4 with IP's: [192.168.50.3 10.96.0.1 127.0.0.1 10.0.0.1]
	I0717 20:18:26.250129 1107949 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/auto-395471/apiserver.crt.56075cf4 ...
	I0717 20:18:26.250163 1107949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/auto-395471/apiserver.crt.56075cf4: {Name:mk9eae3d59c5ff189f1a94595eac4a772513fd03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:18:26.286084 1107949 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/auto-395471/apiserver.key.56075cf4 ...
	I0717 20:18:26.286124 1107949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/auto-395471/apiserver.key.56075cf4: {Name:mk0de4caa65d3bf64473a8a0f1e1326eb93c7f85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:18:26.286294 1107949 certs.go:337] copying /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/auto-395471/apiserver.crt.56075cf4 -> /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/auto-395471/apiserver.crt
	I0717 20:18:26.286396 1107949 certs.go:341] copying /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/auto-395471/apiserver.key.56075cf4 -> /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/auto-395471/apiserver.key
	I0717 20:18:26.286484 1107949 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/auto-395471/proxy-client.key
	I0717 20:18:26.286507 1107949 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/auto-395471/proxy-client.crt with IP's: []
	I0717 20:18:26.501572 1107949 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/auto-395471/proxy-client.crt ...
	I0717 20:18:26.501611 1107949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/auto-395471/proxy-client.crt: {Name:mk11c9c81a5812d93a851e3b34db5abe882ee81b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:18:26.553729 1107949 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/auto-395471/proxy-client.key ...
	I0717 20:18:26.553771 1107949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/auto-395471/proxy-client.key: {Name:mkb53061e9a7cbf0b9ec5c1985379a90a553224a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
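The "generating minikube signed cert ... with IP's: [192.168.50.3 10.96.0.1 127.0.0.1 10.0.0.1]" step above issues the apiserver certificate with the node IP, the kubernetes service ClusterIP, and loopback as IP SANs, signed by the shared minikubeCA. A rough, self-contained sketch of that kind of issuance with Go's crypto/x509 (it creates a throwaway CA instead of loading minikube's ca.key, field values are illustrative, and error handling is elided):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for minikubeCA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the node IP, service ClusterIP and loopback as
	// IP SANs, mirroring the IP list in the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("192.168.50.3"),
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Emit the certificate PEM-encoded, the same form apiserver.crt takes in
	// the profile directory.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}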
	I0717 20:18:26.554201 1107949 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem (1338 bytes)
	W0717 20:18:26.554264 1107949 certs.go:433] ignoring /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954_empty.pem, impossibly tiny 0 bytes
	I0717 20:18:26.554280 1107949 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 20:18:26.554316 1107949 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem (1082 bytes)
	I0717 20:18:26.554348 1107949 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem (1123 bytes)
	I0717 20:18:26.554384 1107949 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem (1675 bytes)
	I0717 20:18:26.554440 1107949 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 20:18:26.555234 1107949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/auto-395471/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 20:18:26.588978 1107949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/auto-395471/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 20:18:26.618486 1107949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/auto-395471/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 20:18:26.645598 1107949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/auto-395471/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 20:18:26.674090 1107949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 20:18:26.701977 1107949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 20:18:26.735078 1107949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 20:18:26.762785 1107949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 20:18:26.789458 1107949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem --> /usr/share/ca-certificates/1068954.pem (1338 bytes)
	I0717 20:18:26.815444 1107949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /usr/share/ca-certificates/10689542.pem (1708 bytes)
	I0717 20:18:26.844835 1107949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 20:18:26.872176 1107949 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 20:18:26.892440 1107949 ssh_runner.go:195] Run: openssl version
	I0717 20:18:26.899180 1107949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1068954.pem && ln -fs /usr/share/ca-certificates/1068954.pem /etc/ssl/certs/1068954.pem"
	I0717 20:18:26.911763 1107949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1068954.pem
	I0717 20:18:26.917517 1107949 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:52 /usr/share/ca-certificates/1068954.pem
	I0717 20:18:26.917606 1107949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1068954.pem
	I0717 20:18:26.924940 1107949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1068954.pem /etc/ssl/certs/51391683.0"
	I0717 20:18:26.937761 1107949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10689542.pem && ln -fs /usr/share/ca-certificates/10689542.pem /etc/ssl/certs/10689542.pem"
	I0717 20:18:26.951512 1107949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10689542.pem
	I0717 20:18:26.957714 1107949 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:52 /usr/share/ca-certificates/10689542.pem
	I0717 20:18:26.957784 1107949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10689542.pem
	I0717 20:18:26.964720 1107949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10689542.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 20:18:26.977717 1107949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 20:18:26.993669 1107949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 20:18:27.000729 1107949 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0717 20:18:27.000802 1107949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 20:18:27.009597 1107949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
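The openssl x509 -hash / ln -fs pairs above install each CA into /etc/ssl/certs under its OpenSSL subject-hash name (for example b5213941.0 for minikubeCA.pem), which is how OpenSSL's default verification path locates trust anchors. A small sketch of the same two steps driven from Go (paths and privileges are illustrative; in the log these commands run on the VM through ssh_runner with sudo):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// installCA links a CA certificate into /etc/ssl/certs under its OpenSSL
// subject-hash name, mirroring the "openssl x509 -hash" + "ln -fs" pair in
// the log above.
func installCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	return exec.Command("sudo", "ln", "-fs", certPath, link).Run()
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("installCA:", err)
	}
}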
	I0717 20:18:27.021442 1107949 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 20:18:27.026562 1107949 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 20:18:27.026628 1107949 kubeadm.go:404] StartCluster: {Name:auto-395471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:auto-395471 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.3 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 20:18:27.026736 1107949 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 20:18:27.026813 1107949 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 20:18:27.066871 1107949 cri.go:89] found id: ""
	I0717 20:18:27.066957 1107949 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 20:18:27.079428 1107949 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 20:18:27.091423 1107949 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 20:18:27.105233 1107949 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 20:18:27.105289 1107949 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 20:18:27.329747 1107949 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 20:18:29.059660 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:29.060219 1108527 main.go:141] libmachine: (flannel-395471) DBG | unable to find current IP address of domain flannel-395471 in network mk-flannel-395471
	I0717 20:18:29.060255 1108527 main.go:141] libmachine: (flannel-395471) DBG | I0717 20:18:29.060156 1108550 retry.go:31] will retry after 2.424875244s: waiting for machine to come up
	I0717 20:18:31.487140 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:31.487529 1108527 main.go:141] libmachine: (flannel-395471) DBG | unable to find current IP address of domain flannel-395471 in network mk-flannel-395471
	I0717 20:18:31.487582 1108527 main.go:141] libmachine: (flannel-395471) DBG | I0717 20:18:31.487510 1108550 retry.go:31] will retry after 2.776819494s: waiting for machine to come up
	I0717 20:18:34.266129 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:34.266710 1108527 main.go:141] libmachine: (flannel-395471) DBG | unable to find current IP address of domain flannel-395471 in network mk-flannel-395471
	I0717 20:18:34.266764 1108527 main.go:141] libmachine: (flannel-395471) DBG | I0717 20:18:34.266691 1108550 retry.go:31] will retry after 3.929992803s: waiting for machine to come up
	I0717 20:18:38.198965 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:38.199515 1108527 main.go:141] libmachine: (flannel-395471) DBG | unable to find current IP address of domain flannel-395471 in network mk-flannel-395471
	I0717 20:18:38.199551 1108527 main.go:141] libmachine: (flannel-395471) DBG | I0717 20:18:38.199451 1108550 retry.go:31] will retry after 5.329437741s: waiting for machine to come up
	I0717 20:18:40.692222 1107949 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0717 20:18:40.692270 1107949 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 20:18:40.692347 1107949 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 20:18:40.692444 1107949 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 20:18:40.692586 1107949 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 20:18:40.692698 1107949 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 20:18:40.694827 1107949 out.go:204]   - Generating certificates and keys ...
	I0717 20:18:40.694922 1107949 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 20:18:40.695002 1107949 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 20:18:40.695082 1107949 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 20:18:40.695175 1107949 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0717 20:18:40.695263 1107949 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0717 20:18:40.695328 1107949 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0717 20:18:40.695397 1107949 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0717 20:18:40.695571 1107949 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [auto-395471 localhost] and IPs [192.168.50.3 127.0.0.1 ::1]
	I0717 20:18:40.695640 1107949 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0717 20:18:40.695798 1107949 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [auto-395471 localhost] and IPs [192.168.50.3 127.0.0.1 ::1]
	I0717 20:18:40.695907 1107949 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 20:18:40.696002 1107949 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 20:18:40.696058 1107949 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0717 20:18:40.696142 1107949 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 20:18:40.696226 1107949 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 20:18:40.696296 1107949 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 20:18:40.696385 1107949 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 20:18:40.696471 1107949 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 20:18:40.696609 1107949 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 20:18:40.696741 1107949 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 20:18:40.696778 1107949 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 20:18:40.696831 1107949 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 20:18:40.698817 1107949 out.go:204]   - Booting up control plane ...
	I0717 20:18:40.698944 1107949 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 20:18:40.699038 1107949 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 20:18:40.699143 1107949 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 20:18:40.699251 1107949 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 20:18:40.699459 1107949 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 20:18:40.699568 1107949 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.005730 seconds
	I0717 20:18:40.699718 1107949 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 20:18:40.699892 1107949 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 20:18:40.699979 1107949 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 20:18:40.700147 1107949 kubeadm.go:322] [mark-control-plane] Marking the node auto-395471 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 20:18:40.700201 1107949 kubeadm.go:322] [bootstrap-token] Using token: 1q5ffh.byh0kmqdasvwpd68
	I0717 20:18:40.702280 1107949 out.go:204]   - Configuring RBAC rules ...
	I0717 20:18:40.702437 1107949 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 20:18:40.702552 1107949 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 20:18:40.702742 1107949 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 20:18:40.702907 1107949 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 20:18:40.703034 1107949 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 20:18:40.703179 1107949 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 20:18:40.703323 1107949 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 20:18:40.703381 1107949 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 20:18:40.703430 1107949 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 20:18:40.703439 1107949 kubeadm.go:322] 
	I0717 20:18:40.703521 1107949 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 20:18:40.703531 1107949 kubeadm.go:322] 
	I0717 20:18:40.703631 1107949 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 20:18:40.703657 1107949 kubeadm.go:322] 
	I0717 20:18:40.703696 1107949 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 20:18:40.703771 1107949 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 20:18:40.703848 1107949 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 20:18:40.703861 1107949 kubeadm.go:322] 
	I0717 20:18:40.703922 1107949 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0717 20:18:40.703934 1107949 kubeadm.go:322] 
	I0717 20:18:40.704047 1107949 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 20:18:40.704068 1107949 kubeadm.go:322] 
	I0717 20:18:40.704125 1107949 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 20:18:40.704210 1107949 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 20:18:40.704308 1107949 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 20:18:40.704329 1107949 kubeadm.go:322] 
	I0717 20:18:40.704423 1107949 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 20:18:40.704510 1107949 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 20:18:40.704519 1107949 kubeadm.go:322] 
	I0717 20:18:40.704642 1107949 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 1q5ffh.byh0kmqdasvwpd68 \
	I0717 20:18:40.704773 1107949 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a35378d49f60cb3201ada96dd7ea3b95e7cce0b7f53bd002f18d31a55869c8fc \
	I0717 20:18:40.704814 1107949 kubeadm.go:322] 	--control-plane 
	I0717 20:18:40.704826 1107949 kubeadm.go:322] 
	I0717 20:18:40.704940 1107949 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 20:18:40.704957 1107949 kubeadm.go:322] 
	I0717 20:18:40.705072 1107949 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 1q5ffh.byh0kmqdasvwpd68 \
	I0717 20:18:40.705232 1107949 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a35378d49f60cb3201ada96dd7ea3b95e7cce0b7f53bd002f18d31a55869c8fc 
	I0717 20:18:40.705251 1107949 cni.go:84] Creating CNI manager for ""
	I0717 20:18:40.705266 1107949 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 20:18:40.707677 1107949 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 20:18:43.533741 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:43.534291 1108527 main.go:141] libmachine: (flannel-395471) Found IP for machine: 192.168.61.151
	I0717 20:18:43.534323 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has current primary IP address 192.168.61.151 and MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:43.534335 1108527 main.go:141] libmachine: (flannel-395471) Reserving static IP address...
	I0717 20:18:43.534732 1108527 main.go:141] libmachine: (flannel-395471) DBG | unable to find host DHCP lease matching {name: "flannel-395471", mac: "52:54:00:2d:0a:fd", ip: "192.168.61.151"} in network mk-flannel-395471
	I0717 20:18:43.628939 1108527 main.go:141] libmachine: (flannel-395471) DBG | Getting to WaitForSSH function...
	I0717 20:18:43.628978 1108527 main.go:141] libmachine: (flannel-395471) Reserved static IP address: 192.168.61.151
	I0717 20:18:43.628993 1108527 main.go:141] libmachine: (flannel-395471) Waiting for SSH to be available...
	I0717 20:18:43.632498 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:43.632852 1108527 main.go:141] libmachine: (flannel-395471) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:2d:0a:fd", ip: ""} in network mk-flannel-395471
	I0717 20:18:43.632885 1108527 main.go:141] libmachine: (flannel-395471) DBG | unable to find defined IP address of network mk-flannel-395471 interface with MAC address 52:54:00:2d:0a:fd
	I0717 20:18:43.633071 1108527 main.go:141] libmachine: (flannel-395471) DBG | Using SSH client type: external
	I0717 20:18:43.633099 1108527 main.go:141] libmachine: (flannel-395471) DBG | Using SSH private key: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/flannel-395471/id_rsa (-rw-------)
	I0717 20:18:43.633132 1108527 main.go:141] libmachine: (flannel-395471) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/flannel-395471/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 20:18:43.633162 1108527 main.go:141] libmachine: (flannel-395471) DBG | About to run SSH command:
	I0717 20:18:43.633174 1108527 main.go:141] libmachine: (flannel-395471) DBG | exit 0
	I0717 20:18:43.637308 1108527 main.go:141] libmachine: (flannel-395471) DBG | SSH cmd err, output: exit status 255: 
	I0717 20:18:43.637354 1108527 main.go:141] libmachine: (flannel-395471) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0717 20:18:43.637365 1108527 main.go:141] libmachine: (flannel-395471) DBG | command : exit 0
	I0717 20:18:43.637376 1108527 main.go:141] libmachine: (flannel-395471) DBG | err     : exit status 255
	I0717 20:18:43.637444 1108527 main.go:141] libmachine: (flannel-395471) DBG | output  : 
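The WaitForSSH exchange above probes the guest by running exit 0 over the external ssh client and treating a zero exit status as "SSH is available"; the attempt here fails with status 255 because the DHCP lease had not been handed out yet, and it succeeds on the retry a few seconds further down. A stripped-down sketch of that probe (the host, key path, and ssh options are taken from the log above; the retry cadence is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady probes the guest by running "exit 0" over ssh; a nil error
// (exit status 0) means the SSH server is up and accepting our key.
func sshReady(host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@"+host,
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	host := "192.168.61.151" // IP leased to flannel-395471 in the log above
	key := "/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/flannel-395471/id_rsa"
	for i := 0; i < 10; i++ {
		if sshReady(host, key) {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for SSH")
}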
	I0717 20:18:40.709661 1107949 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 20:18:40.747652 1107949 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 20:18:40.830523 1107949 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 20:18:40.830640 1107949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:18:40.830647 1107949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5 minikube.k8s.io/name=auto-395471 minikube.k8s.io/updated_at=2023_07_17T20_18_40_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:18:40.854130 1107949 ops.go:34] apiserver oom_adj: -16
	I0717 20:18:41.136438 1107949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:18:41.734842 1107949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:18:42.234924 1107949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:18:42.734852 1107949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:18:43.234938 1107949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:18:43.734452 1107949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:18:44.234970 1107949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:18:46.637842 1108527 main.go:141] libmachine: (flannel-395471) DBG | Getting to WaitForSSH function...
	I0717 20:18:46.640622 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:46.641041 1108527 main.go:141] libmachine: (flannel-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:0a:fd", ip: ""} in network mk-flannel-395471: {Iface:virbr4 ExpiryTime:2023-07-17 21:18:36 +0000 UTC Type:0 Mac:52:54:00:2d:0a:fd Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:flannel-395471 Clientid:01:52:54:00:2d:0a:fd}
	I0717 20:18:46.641076 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined IP address 192.168.61.151 and MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:46.641175 1108527 main.go:141] libmachine: (flannel-395471) DBG | Using SSH client type: external
	I0717 20:18:46.641215 1108527 main.go:141] libmachine: (flannel-395471) DBG | Using SSH private key: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/flannel-395471/id_rsa (-rw-------)
	I0717 20:18:46.641280 1108527 main.go:141] libmachine: (flannel-395471) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.151 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/flannel-395471/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 20:18:46.641313 1108527 main.go:141] libmachine: (flannel-395471) DBG | About to run SSH command:
	I0717 20:18:46.641356 1108527 main.go:141] libmachine: (flannel-395471) DBG | exit 0
	I0717 20:18:46.734678 1108527 main.go:141] libmachine: (flannel-395471) DBG | SSH cmd err, output: <nil>: 
	I0717 20:18:46.735034 1108527 main.go:141] libmachine: (flannel-395471) KVM machine creation complete!
	I0717 20:18:46.735367 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetConfigRaw
	I0717 20:18:46.735996 1108527 main.go:141] libmachine: (flannel-395471) Calling .DriverName
	I0717 20:18:46.736246 1108527 main.go:141] libmachine: (flannel-395471) Calling .DriverName
	I0717 20:18:46.736481 1108527 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 20:18:46.736502 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetState
	I0717 20:18:46.738054 1108527 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 20:18:46.738074 1108527 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 20:18:46.738081 1108527 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 20:18:46.738087 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHHostname
	I0717 20:18:46.741126 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:46.741519 1108527 main.go:141] libmachine: (flannel-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:0a:fd", ip: ""} in network mk-flannel-395471: {Iface:virbr4 ExpiryTime:2023-07-17 21:18:36 +0000 UTC Type:0 Mac:52:54:00:2d:0a:fd Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:flannel-395471 Clientid:01:52:54:00:2d:0a:fd}
	I0717 20:18:46.741579 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined IP address 192.168.61.151 and MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:46.741826 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHPort
	I0717 20:18:46.742101 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHKeyPath
	I0717 20:18:46.742316 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHKeyPath
	I0717 20:18:46.742506 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHUsername
	I0717 20:18:46.742728 1108527 main.go:141] libmachine: Using SSH client type: native
	I0717 20:18:46.743189 1108527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I0717 20:18:46.743213 1108527 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 20:18:46.861341 1108527 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 20:18:46.861371 1108527 main.go:141] libmachine: Detecting the provisioner...
	I0717 20:18:46.861384 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHHostname
	I0717 20:18:46.865627 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:46.866035 1108527 main.go:141] libmachine: (flannel-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:0a:fd", ip: ""} in network mk-flannel-395471: {Iface:virbr4 ExpiryTime:2023-07-17 21:18:36 +0000 UTC Type:0 Mac:52:54:00:2d:0a:fd Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:flannel-395471 Clientid:01:52:54:00:2d:0a:fd}
	I0717 20:18:46.866097 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined IP address 192.168.61.151 and MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:46.866304 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHPort
	I0717 20:18:46.866538 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHKeyPath
	I0717 20:18:46.866737 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHKeyPath
	I0717 20:18:46.866871 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHUsername
	I0717 20:18:46.867051 1108527 main.go:141] libmachine: Using SSH client type: native
	I0717 20:18:46.867577 1108527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I0717 20:18:46.867592 1108527 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 20:18:46.990703 1108527 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gf5d52c7-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0717 20:18:46.990773 1108527 main.go:141] libmachine: found compatible host: buildroot
	I0717 20:18:46.990784 1108527 main.go:141] libmachine: Provisioning with buildroot...
	I0717 20:18:46.990796 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetMachineName
	I0717 20:18:46.991167 1108527 buildroot.go:166] provisioning hostname "flannel-395471"
	I0717 20:18:46.991205 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetMachineName
	I0717 20:18:46.991411 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHHostname
	I0717 20:18:46.994719 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:46.995177 1108527 main.go:141] libmachine: (flannel-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:0a:fd", ip: ""} in network mk-flannel-395471: {Iface:virbr4 ExpiryTime:2023-07-17 21:18:36 +0000 UTC Type:0 Mac:52:54:00:2d:0a:fd Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:flannel-395471 Clientid:01:52:54:00:2d:0a:fd}
	I0717 20:18:46.995218 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined IP address 192.168.61.151 and MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:46.995480 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHPort
	I0717 20:18:46.995730 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHKeyPath
	I0717 20:18:46.995956 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHKeyPath
	I0717 20:18:46.996146 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHUsername
	I0717 20:18:46.996360 1108527 main.go:141] libmachine: Using SSH client type: native
	I0717 20:18:46.997013 1108527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I0717 20:18:46.997033 1108527 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-395471 && echo "flannel-395471" | sudo tee /etc/hostname
	I0717 20:18:47.133549 1108527 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-395471
	
	I0717 20:18:47.133607 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHHostname
	I0717 20:18:47.137163 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:47.137759 1108527 main.go:141] libmachine: (flannel-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:0a:fd", ip: ""} in network mk-flannel-395471: {Iface:virbr4 ExpiryTime:2023-07-17 21:18:36 +0000 UTC Type:0 Mac:52:54:00:2d:0a:fd Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:flannel-395471 Clientid:01:52:54:00:2d:0a:fd}
	I0717 20:18:47.137814 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined IP address 192.168.61.151 and MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:47.138182 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHPort
	I0717 20:18:47.138454 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHKeyPath
	I0717 20:18:47.138734 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHKeyPath
	I0717 20:18:47.138930 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHUsername
	I0717 20:18:47.139148 1108527 main.go:141] libmachine: Using SSH client type: native
	I0717 20:18:47.139761 1108527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I0717 20:18:47.139790 1108527 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-395471' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-395471/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-395471' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 20:18:47.275543 1108527 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 20:18:47.275583 1108527 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16890-1061725/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-1061725/.minikube}
	I0717 20:18:47.275638 1108527 buildroot.go:174] setting up certificates
	I0717 20:18:47.275654 1108527 provision.go:83] configureAuth start
	I0717 20:18:47.275673 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetMachineName
	I0717 20:18:47.275981 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetIP
	I0717 20:18:47.279192 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:47.279581 1108527 main.go:141] libmachine: (flannel-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:0a:fd", ip: ""} in network mk-flannel-395471: {Iface:virbr4 ExpiryTime:2023-07-17 21:18:36 +0000 UTC Type:0 Mac:52:54:00:2d:0a:fd Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:flannel-395471 Clientid:01:52:54:00:2d:0a:fd}
	I0717 20:18:47.279617 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined IP address 192.168.61.151 and MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:47.279882 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHHostname
	I0717 20:18:47.282825 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:47.283257 1108527 main.go:141] libmachine: (flannel-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:0a:fd", ip: ""} in network mk-flannel-395471: {Iface:virbr4 ExpiryTime:2023-07-17 21:18:36 +0000 UTC Type:0 Mac:52:54:00:2d:0a:fd Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:flannel-395471 Clientid:01:52:54:00:2d:0a:fd}
	I0717 20:18:47.283294 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined IP address 192.168.61.151 and MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:47.283427 1108527 provision.go:138] copyHostCerts
	I0717 20:18:47.283515 1108527 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem, removing ...
	I0717 20:18:47.283531 1108527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem
	I0717 20:18:47.283696 1108527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem (1082 bytes)
	I0717 20:18:47.283864 1108527 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem, removing ...
	I0717 20:18:47.283877 1108527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem
	I0717 20:18:47.283905 1108527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem (1123 bytes)
	I0717 20:18:47.283987 1108527 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem, removing ...
	I0717 20:18:47.283995 1108527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem
	I0717 20:18:47.284020 1108527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem (1675 bytes)
	I0717 20:18:47.284093 1108527 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem org=jenkins.flannel-395471 san=[192.168.61.151 192.168.61.151 localhost 127.0.0.1 minikube flannel-395471]
	I0717 20:18:47.353952 1108527 provision.go:172] copyRemoteCerts
	I0717 20:18:47.354029 1108527 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 20:18:47.354067 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHHostname
	I0717 20:18:47.357176 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:47.357788 1108527 main.go:141] libmachine: (flannel-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:0a:fd", ip: ""} in network mk-flannel-395471: {Iface:virbr4 ExpiryTime:2023-07-17 21:18:36 +0000 UTC Type:0 Mac:52:54:00:2d:0a:fd Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:flannel-395471 Clientid:01:52:54:00:2d:0a:fd}
	I0717 20:18:47.357826 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined IP address 192.168.61.151 and MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:47.358090 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHPort
	I0717 20:18:47.358336 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHKeyPath
	I0717 20:18:47.358506 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHUsername
	I0717 20:18:47.358631 1108527 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/flannel-395471/id_rsa Username:docker}
	I0717 20:18:47.458983 1108527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 20:18:47.484052 1108527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 20:18:47.510890 1108527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 20:18:47.538664 1108527 provision.go:86] duration metric: configureAuth took 262.990089ms
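(Aside: the server certificate generated above is signed for the SANs listed in the provision step: 192.168.61.151, localhost, 127.0.0.1, minikube, flannel-395471. A quick way to double-check what was written, as a sketch that assumes openssl is available on the CI host and uses only the path already shown in the log:)

	# inspect the SANs of the freshly generated server certificate on the CI host
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'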
	I0717 20:18:47.538717 1108527 buildroot.go:189] setting minikube options for container-runtime
	I0717 20:18:47.538948 1108527 config.go:182] Loaded profile config "flannel-395471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 20:18:47.539089 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHHostname
	I0717 20:18:47.542093 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:47.542513 1108527 main.go:141] libmachine: (flannel-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:0a:fd", ip: ""} in network mk-flannel-395471: {Iface:virbr4 ExpiryTime:2023-07-17 21:18:36 +0000 UTC Type:0 Mac:52:54:00:2d:0a:fd Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:flannel-395471 Clientid:01:52:54:00:2d:0a:fd}
	I0717 20:18:47.542543 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined IP address 192.168.61.151 and MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:47.542735 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHPort
	I0717 20:18:47.542950 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHKeyPath
	I0717 20:18:47.543109 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHKeyPath
	I0717 20:18:47.543237 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHUsername
	I0717 20:18:47.543378 1108527 main.go:141] libmachine: Using SSH client type: native
	I0717 20:18:47.543792 1108527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I0717 20:18:47.543808 1108527 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 20:18:47.892535 1108527 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
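(The "%!s(MISSING)" in the logged command above is a rendering artifact of the logger's format string; the effective step is simply to drop a sysconfig fragment for CRI-O on the guest and restart the service. A minimal sketch, with the path and option value taken verbatim from the log:)

	# write the minikube-managed CRI-O options and restart the runtime
	sudo mkdir -p /etc/sysconfig
	printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
	  | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio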
	
	I0717 20:18:47.892573 1108527 main.go:141] libmachine: Checking connection to Docker...
	I0717 20:18:47.892585 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetURL
	I0717 20:18:47.893917 1108527 main.go:141] libmachine: (flannel-395471) DBG | Using libvirt version 6000000
	I0717 20:18:47.896649 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:47.897105 1108527 main.go:141] libmachine: (flannel-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:0a:fd", ip: ""} in network mk-flannel-395471: {Iface:virbr4 ExpiryTime:2023-07-17 21:18:36 +0000 UTC Type:0 Mac:52:54:00:2d:0a:fd Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:flannel-395471 Clientid:01:52:54:00:2d:0a:fd}
	I0717 20:18:47.897147 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined IP address 192.168.61.151 and MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:47.897313 1108527 main.go:141] libmachine: Docker is up and running!
	I0717 20:18:47.897336 1108527 main.go:141] libmachine: Reticulating splines...
	I0717 20:18:47.897346 1108527 client.go:171] LocalClient.Create took 28.965562513s
	I0717 20:18:47.897380 1108527 start.go:167] duration metric: libmachine.API.Create for "flannel-395471" took 28.96563554s
	I0717 20:18:47.897401 1108527 start.go:300] post-start starting for "flannel-395471" (driver="kvm2")
	I0717 20:18:47.897415 1108527 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 20:18:47.897437 1108527 main.go:141] libmachine: (flannel-395471) Calling .DriverName
	I0717 20:18:47.897766 1108527 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 20:18:47.897803 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHHostname
	I0717 20:18:47.900317 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:47.900693 1108527 main.go:141] libmachine: (flannel-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:0a:fd", ip: ""} in network mk-flannel-395471: {Iface:virbr4 ExpiryTime:2023-07-17 21:18:36 +0000 UTC Type:0 Mac:52:54:00:2d:0a:fd Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:flannel-395471 Clientid:01:52:54:00:2d:0a:fd}
	I0717 20:18:47.900724 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined IP address 192.168.61.151 and MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:47.900923 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHPort
	I0717 20:18:47.901139 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHKeyPath
	I0717 20:18:47.901343 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHUsername
	I0717 20:18:47.901591 1108527 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/flannel-395471/id_rsa Username:docker}
	I0717 20:18:47.996817 1108527 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 20:18:48.001953 1108527 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 20:18:48.001987 1108527 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/addons for local assets ...
	I0717 20:18:48.002064 1108527 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/files for local assets ...
	I0717 20:18:48.002198 1108527 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> 10689542.pem in /etc/ssl/certs
	I0717 20:18:48.002316 1108527 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 20:18:48.012019 1108527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 20:18:48.038527 1108527 start.go:303] post-start completed in 141.105481ms
	I0717 20:18:48.038597 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetConfigRaw
	I0717 20:18:48.039452 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetIP
	I0717 20:18:48.042405 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:48.042786 1108527 main.go:141] libmachine: (flannel-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:0a:fd", ip: ""} in network mk-flannel-395471: {Iface:virbr4 ExpiryTime:2023-07-17 21:18:36 +0000 UTC Type:0 Mac:52:54:00:2d:0a:fd Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:flannel-395471 Clientid:01:52:54:00:2d:0a:fd}
	I0717 20:18:48.042823 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined IP address 192.168.61.151 and MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:48.043118 1108527 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/flannel-395471/config.json ...
	I0717 20:18:48.043371 1108527 start.go:128] duration metric: createHost completed in 29.136513272s
	I0717 20:18:48.043400 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHHostname
	I0717 20:18:48.046475 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:48.046877 1108527 main.go:141] libmachine: (flannel-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:0a:fd", ip: ""} in network mk-flannel-395471: {Iface:virbr4 ExpiryTime:2023-07-17 21:18:36 +0000 UTC Type:0 Mac:52:54:00:2d:0a:fd Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:flannel-395471 Clientid:01:52:54:00:2d:0a:fd}
	I0717 20:18:48.046911 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined IP address 192.168.61.151 and MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:48.047087 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHPort
	I0717 20:18:48.047342 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHKeyPath
	I0717 20:18:48.047535 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHKeyPath
	I0717 20:18:48.047733 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHUsername
	I0717 20:18:48.047949 1108527 main.go:141] libmachine: Using SSH client type: native
	I0717 20:18:48.048587 1108527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I0717 20:18:48.048611 1108527 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 20:18:48.175031 1108527 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689625128.159881775
	
	I0717 20:18:48.175052 1108527 fix.go:206] guest clock: 1689625128.159881775
	I0717 20:18:48.175060 1108527 fix.go:219] Guest: 2023-07-17 20:18:48.159881775 +0000 UTC Remote: 2023-07-17 20:18:48.043387672 +0000 UTC m=+29.271684653 (delta=116.494103ms)
	I0717 20:18:48.175080 1108527 fix.go:190] guest clock delta is within tolerance: 116.494103ms
	I0717 20:18:48.175085 1108527 start.go:83] releasing machines lock for "flannel-395471", held for 29.268356496s
	I0717 20:18:48.175106 1108527 main.go:141] libmachine: (flannel-395471) Calling .DriverName
	I0717 20:18:48.175410 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetIP
	I0717 20:18:48.179050 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:48.179565 1108527 main.go:141] libmachine: (flannel-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:0a:fd", ip: ""} in network mk-flannel-395471: {Iface:virbr4 ExpiryTime:2023-07-17 21:18:36 +0000 UTC Type:0 Mac:52:54:00:2d:0a:fd Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:flannel-395471 Clientid:01:52:54:00:2d:0a:fd}
	I0717 20:18:48.179594 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined IP address 192.168.61.151 and MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:48.179820 1108527 main.go:141] libmachine: (flannel-395471) Calling .DriverName
	I0717 20:18:48.180628 1108527 main.go:141] libmachine: (flannel-395471) Calling .DriverName
	I0717 20:18:48.180869 1108527 main.go:141] libmachine: (flannel-395471) Calling .DriverName
	I0717 20:18:48.180980 1108527 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 20:18:48.181045 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHHostname
	I0717 20:18:48.181103 1108527 ssh_runner.go:195] Run: cat /version.json
	I0717 20:18:48.181135 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHHostname
	I0717 20:18:48.184229 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:48.184264 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:48.184646 1108527 main.go:141] libmachine: (flannel-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:0a:fd", ip: ""} in network mk-flannel-395471: {Iface:virbr4 ExpiryTime:2023-07-17 21:18:36 +0000 UTC Type:0 Mac:52:54:00:2d:0a:fd Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:flannel-395471 Clientid:01:52:54:00:2d:0a:fd}
	I0717 20:18:48.184702 1108527 main.go:141] libmachine: (flannel-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:0a:fd", ip: ""} in network mk-flannel-395471: {Iface:virbr4 ExpiryTime:2023-07-17 21:18:36 +0000 UTC Type:0 Mac:52:54:00:2d:0a:fd Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:flannel-395471 Clientid:01:52:54:00:2d:0a:fd}
	I0717 20:18:48.184726 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined IP address 192.168.61.151 and MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:48.184740 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined IP address 192.168.61.151 and MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:48.184863 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHPort
	I0717 20:18:48.185024 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHPort
	I0717 20:18:48.185131 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHKeyPath
	I0717 20:18:48.185290 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHUsername
	I0717 20:18:48.185303 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHKeyPath
	I0717 20:18:48.185495 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHUsername
	I0717 20:18:48.185505 1108527 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/flannel-395471/id_rsa Username:docker}
	I0717 20:18:48.185697 1108527 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/flannel-395471/id_rsa Username:docker}
	W0717 20:18:48.293061 1108527 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 20:18:48.293182 1108527 ssh_runner.go:195] Run: systemctl --version
	I0717 20:18:48.300847 1108527 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 20:18:48.474929 1108527 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 20:18:48.481272 1108527 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 20:18:48.481363 1108527 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 20:18:48.497667 1108527 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 20:18:48.497698 1108527 start.go:469] detecting cgroup driver to use...
	I0717 20:18:48.497808 1108527 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 20:18:48.511715 1108527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 20:18:48.524959 1108527 docker.go:196] disabling cri-docker service (if available) ...
	I0717 20:18:48.525054 1108527 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 20:18:48.540003 1108527 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 20:18:48.553935 1108527 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 20:18:48.663803 1108527 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 20:18:48.796498 1108527 docker.go:212] disabling docker service ...
	I0717 20:18:48.796591 1108527 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 20:18:44.734731 1107949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:18:45.235194 1107949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:18:45.734418 1107949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:18:46.235185 1107949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:18:46.734375 1107949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:18:47.234816 1107949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:18:47.734819 1107949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:18:48.234305 1107949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:18:48.734481 1107949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:18:49.234799 1107949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:18:48.815093 1108527 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 20:18:48.834588 1108527 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 20:18:48.948272 1108527 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 20:18:49.068641 1108527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 20:18:49.083298 1108527 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 20:18:49.103072 1108527 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 20:18:49.103153 1108527 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 20:18:49.114192 1108527 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 20:18:49.114266 1108527 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 20:18:49.125795 1108527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 20:18:49.138684 1108527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 20:18:49.150720 1108527 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 20:18:49.163940 1108527 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 20:18:49.174331 1108527 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 20:18:49.174402 1108527 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 20:18:49.189675 1108527 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 20:18:49.200428 1108527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 20:18:49.333929 1108527 ssh_runner.go:195] Run: sudo systemctl restart crio
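(For readability, the CRI-O runtime configuration logged between 20:18:49.083 and 20:18:49.333 above collapses to roughly the following sequence; every path and value is taken verbatim from the logged commands:)

	# point crictl at the CRI-O socket
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# pin the pause image and switch CRI-O to the cgroupfs cgroup driver
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# make bridged traffic visible to iptables and enable IPv4 forwarding, then restart
	sudo modprobe br_netfilter
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo systemctl daemon-reload && sudo systemctl restart crio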
	I0717 20:18:49.526131 1108527 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 20:18:49.526249 1108527 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 20:18:49.537581 1108527 start.go:537] Will wait 60s for crictl version
	I0717 20:18:49.537645 1108527 ssh_runner.go:195] Run: which crictl
	I0717 20:18:49.542464 1108527 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 20:18:49.577624 1108527 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 20:18:49.577728 1108527 ssh_runner.go:195] Run: crio --version
	I0717 20:18:49.633461 1108527 ssh_runner.go:195] Run: crio --version
	I0717 20:18:49.701995 1108527 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0717 20:18:49.704499 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetIP
	I0717 20:18:49.707326 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:49.707733 1108527 main.go:141] libmachine: (flannel-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:0a:fd", ip: ""} in network mk-flannel-395471: {Iface:virbr4 ExpiryTime:2023-07-17 21:18:36 +0000 UTC Type:0 Mac:52:54:00:2d:0a:fd Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:flannel-395471 Clientid:01:52:54:00:2d:0a:fd}
	I0717 20:18:49.707777 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined IP address 192.168.61.151 and MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:18:49.708037 1108527 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0717 20:18:49.713323 1108527 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 20:18:49.728250 1108527 localpath.go:92] copying /home/jenkins/minikube-integration/16890-1061725/.minikube/client.crt -> /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/flannel-395471/client.crt
	I0717 20:18:49.728394 1108527 localpath.go:117] copying /home/jenkins/minikube-integration/16890-1061725/.minikube/client.key -> /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/flannel-395471/client.key
	I0717 20:18:49.728505 1108527 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 20:18:49.728556 1108527 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 20:18:49.765623 1108527 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0717 20:18:49.765714 1108527 ssh_runner.go:195] Run: which lz4
	I0717 20:18:49.770881 1108527 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 20:18:49.776242 1108527 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 20:18:49.776281 1108527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0717 20:18:51.612918 1108527 crio.go:444] Took 1.842074 seconds to copy over tarball
	I0717 20:18:51.613060 1108527 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
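(The preceding runs implement minikube's preload path: if "crictl images" does not report the expected control-plane images, the cached preload tarball is copied to the guest and unpacked into /var. A condensed sketch follows; the image name, tarball path, and extraction command come from the log, and the grep is only an illustrative stand-in for minikube's JSON comparison:)

	# on the guest: extract the preload tarball only if the images are missing
	if ! sudo crictl images --output json | grep -q 'registry.k8s.io/kube-apiserver:v1.27.3'; then
	  # /preloaded.tar.lz4 is scp'd from the host cache by minikube beforehand
	  sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	fi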
	I0717 20:18:49.734686 1107949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:18:50.234825 1107949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:18:50.735194 1107949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:18:51.234218 1107949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:18:51.734824 1107949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:18:52.234562 1107949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:18:52.735244 1107949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:18:53.234858 1107949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:18:53.734524 1107949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:18:53.904872 1107949 kubeadm.go:1081] duration metric: took 13.074329481s to wait for elevateKubeSystemPrivileges.
	I0717 20:18:53.904916 1107949 kubeadm.go:406] StartCluster complete in 26.878294757s
	I0717 20:18:53.904941 1107949 settings.go:142] acquiring lock: {Name:mk3323296c186763db6ebb5a1c5ae94a6b1a6242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:18:53.905043 1107949 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 20:18:53.907573 1107949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/kubeconfig: {Name:mk7f4fe48169c87be980c7edd9dbe55d4ea8b9ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:18:53.911589 1107949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 20:18:53.911918 1107949 config.go:182] Loaded profile config "auto-395471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 20:18:53.911974 1107949 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 20:18:53.912077 1107949 addons.go:69] Setting storage-provisioner=true in profile "auto-395471"
	I0717 20:18:53.912104 1107949 addons.go:231] Setting addon storage-provisioner=true in "auto-395471"
	I0717 20:18:53.912167 1107949 host.go:66] Checking if "auto-395471" exists ...
	I0717 20:18:53.912596 1107949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:18:53.912657 1107949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:18:53.912805 1107949 addons.go:69] Setting default-storageclass=true in profile "auto-395471"
	I0717 20:18:53.912875 1107949 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-395471"
	I0717 20:18:53.913341 1107949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:18:53.913386 1107949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:18:53.940863 1107949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38445
	I0717 20:18:53.941018 1107949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44913
	I0717 20:18:53.941675 1107949 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:18:53.941925 1107949 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:18:53.942599 1107949 main.go:141] libmachine: Using API Version  1
	I0717 20:18:53.942633 1107949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:18:53.942796 1107949 main.go:141] libmachine: Using API Version  1
	I0717 20:18:53.942832 1107949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:18:53.943012 1107949 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:18:53.943337 1107949 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:18:53.943400 1107949 main.go:141] libmachine: (auto-395471) Calling .GetState
	I0717 20:18:53.944436 1107949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:18:53.944496 1107949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:18:53.957692 1107949 addons.go:231] Setting addon default-storageclass=true in "auto-395471"
	I0717 20:18:53.957744 1107949 host.go:66] Checking if "auto-395471" exists ...
	I0717 20:18:53.958043 1107949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:18:53.958092 1107949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:18:53.968598 1107949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41699
	I0717 20:18:53.969369 1107949 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:18:53.970885 1107949 main.go:141] libmachine: Using API Version  1
	I0717 20:18:53.970911 1107949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:18:53.971409 1107949 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:18:53.972067 1107949 main.go:141] libmachine: (auto-395471) Calling .GetState
	I0717 20:18:53.974385 1107949 main.go:141] libmachine: (auto-395471) Calling .DriverName
	I0717 20:18:53.977040 1107949 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 20:18:53.979295 1107949 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 20:18:53.979322 1107949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 20:18:53.979354 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHHostname
	I0717 20:18:53.980320 1107949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46069
	I0717 20:18:53.981314 1107949 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:18:53.982068 1107949 main.go:141] libmachine: Using API Version  1
	I0717 20:18:53.982091 1107949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:18:53.982670 1107949 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:18:53.983382 1107949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:18:53.983422 1107949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:18:53.985026 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:53.985948 1107949 main.go:141] libmachine: (auto-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:ba:8c", ip: ""} in network mk-auto-395471: {Iface:virbr1 ExpiryTime:2023-07-17 21:18:10 +0000 UTC Type:0 Mac:52:54:00:b3:ba:8c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:auto-395471 Clientid:01:52:54:00:b3:ba:8c}
	I0717 20:18:53.985973 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined IP address 192.168.50.3 and MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:53.986223 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHPort
	I0717 20:18:53.989368 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHKeyPath
	I0717 20:18:53.989824 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHUsername
	I0717 20:18:53.993367 1107949 sshutil.go:53] new ssh client: &{IP:192.168.50.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/auto-395471/id_rsa Username:docker}
	I0717 20:18:54.007005 1107949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45935
	I0717 20:18:54.007600 1107949 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:18:54.008274 1107949 main.go:141] libmachine: Using API Version  1
	I0717 20:18:54.008304 1107949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:18:54.008788 1107949 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:18:54.009065 1107949 main.go:141] libmachine: (auto-395471) Calling .GetState
	I0717 20:18:54.011702 1107949 main.go:141] libmachine: (auto-395471) Calling .DriverName
	I0717 20:18:54.012050 1107949 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 20:18:54.012072 1107949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 20:18:54.012095 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHHostname
	I0717 20:18:54.016437 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:54.016866 1107949 main.go:141] libmachine: (auto-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:ba:8c", ip: ""} in network mk-auto-395471: {Iface:virbr1 ExpiryTime:2023-07-17 21:18:10 +0000 UTC Type:0 Mac:52:54:00:b3:ba:8c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:auto-395471 Clientid:01:52:54:00:b3:ba:8c}
	I0717 20:18:54.016906 1107949 main.go:141] libmachine: (auto-395471) DBG | domain auto-395471 has defined IP address 192.168.50.3 and MAC address 52:54:00:b3:ba:8c in network mk-auto-395471
	I0717 20:18:54.017490 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHPort
	I0717 20:18:54.017746 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHKeyPath
	I0717 20:18:54.017962 1107949 main.go:141] libmachine: (auto-395471) Calling .GetSSHUsername
	I0717 20:18:54.018119 1107949 sshutil.go:53] new ssh client: &{IP:192.168.50.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/auto-395471/id_rsa Username:docker}
	I0717 20:18:54.199111 1107949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 20:18:54.230875 1107949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 20:18:54.245868 1107949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 20:18:54.569837 1107949 kapi.go:248] "coredns" deployment in "kube-system" namespace and "auto-395471" context rescaled to 1 replicas
	I0717 20:18:54.569933 1107949 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.50.3 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 20:18:54.572382 1107949 out.go:177] * Verifying Kubernetes components...
	I0717 20:18:55.203118 1108527 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.590023868s)
	I0717 20:18:55.203168 1108527 crio.go:451] Took 3.590211 seconds to extract the tarball
	I0717 20:18:55.203225 1108527 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 20:18:55.247339 1108527 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 20:18:55.329211 1108527 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 20:18:55.329245 1108527 cache_images.go:84] Images are preloaded, skipping loading
	I0717 20:18:55.329328 1108527 ssh_runner.go:195] Run: crio config
	I0717 20:18:55.394247 1108527 cni.go:84] Creating CNI manager for "flannel"
	I0717 20:18:55.394302 1108527 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 20:18:55.394335 1108527 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.151 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-395471 NodeName:flannel-395471 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.151"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.151 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 20:18:55.394613 1108527 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.151
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-395471"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.151
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.151"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
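(In the generated config above, the evictionHard values appear mangled (the trailing %!"(MISSING)) because of the logger's format handling; what gets staged on the node should be the plain string "0%". If needed, the staged file can be inspected directly. This sketch assumes the file has not yet been consumed by kubeadm; the target path /var/tmp/minikube/kubeadm.yaml.new comes from the scp step a few lines below:)

	# peek at the kubeadm config actually staged on the guest
	minikube -p flannel-395471 ssh -- grep -A3 evictionHard /var/tmp/minikube/kubeadm.yaml.new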
	
	I0717 20:18:55.394763 1108527 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=flannel-395471 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.151
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:flannel-395471 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:}
	I0717 20:18:55.394881 1108527 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 20:18:55.408536 1108527 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 20:18:55.408640 1108527 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 20:18:55.426722 1108527 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0717 20:18:55.456384 1108527 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 20:18:55.477235 1108527 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0717 20:18:55.497226 1108527 ssh_runner.go:195] Run: grep 192.168.61.151	control-plane.minikube.internal$ /etc/hosts
	I0717 20:18:55.501726 1108527 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.151	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 20:18:55.516785 1108527 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/flannel-395471 for IP: 192.168.61.151
	I0717 20:18:55.516832 1108527 certs.go:190] acquiring lock for shared ca certs: {Name:mkdfbc203d7bd874aff2f1c4e5c3352251804e32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:18:55.517043 1108527 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key
	I0717 20:18:55.517115 1108527 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key
	I0717 20:18:55.517227 1108527 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/flannel-395471/client.key
	I0717 20:18:55.517258 1108527 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/flannel-395471/apiserver.key.c009ae4a
	I0717 20:18:55.517279 1108527 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/flannel-395471/apiserver.crt.c009ae4a with IP's: [192.168.61.151 10.96.0.1 127.0.0.1 10.0.0.1]
	I0717 20:18:55.805801 1108527 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/flannel-395471/apiserver.crt.c009ae4a ...
	I0717 20:18:55.805844 1108527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/flannel-395471/apiserver.crt.c009ae4a: {Name:mkdf449aeba4845390c25eddcb41a0f04f8d06e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:18:55.806078 1108527 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/flannel-395471/apiserver.key.c009ae4a ...
	I0717 20:18:55.806097 1108527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/flannel-395471/apiserver.key.c009ae4a: {Name:mkd0176187503d31ada48d9740e55b14d4491593 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:18:55.806208 1108527 certs.go:337] copying /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/flannel-395471/apiserver.crt.c009ae4a -> /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/flannel-395471/apiserver.crt
	I0717 20:18:55.806301 1108527 certs.go:341] copying /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/flannel-395471/apiserver.key.c009ae4a -> /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/flannel-395471/apiserver.key
	I0717 20:18:55.806370 1108527 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/flannel-395471/proxy-client.key
	I0717 20:18:55.806390 1108527 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/flannel-395471/proxy-client.crt with IP's: []
	I0717 20:18:56.008484 1108527 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/flannel-395471/proxy-client.crt ...
	I0717 20:18:56.008527 1108527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/flannel-395471/proxy-client.crt: {Name:mk81685aaf776570615473538c347cf658eb5467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:18:56.008743 1108527 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/flannel-395471/proxy-client.key ...
	I0717 20:18:56.008765 1108527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/flannel-395471/proxy-client.key: {Name:mk1ce11dba74636e81c4013d714b1782723276a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:18:56.008998 1108527 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem (1338 bytes)
	W0717 20:18:56.009047 1108527 certs.go:433] ignoring /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954_empty.pem, impossibly tiny 0 bytes
	I0717 20:18:56.009060 1108527 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 20:18:56.009084 1108527 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem (1082 bytes)
	I0717 20:18:56.009107 1108527 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem (1123 bytes)
	I0717 20:18:56.009131 1108527 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem (1675 bytes)
	I0717 20:18:56.009170 1108527 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 20:18:56.009978 1108527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/flannel-395471/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 20:18:56.039864 1108527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/flannel-395471/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 20:18:56.067236 1108527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/flannel-395471/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 20:18:56.102809 1108527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/flannel-395471/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 20:18:56.136128 1108527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 20:18:56.166179 1108527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 20:18:56.196829 1108527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 20:18:56.228024 1108527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 20:18:56.259652 1108527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /usr/share/ca-certificates/10689542.pem (1708 bytes)
	I0717 20:18:56.287798 1108527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 20:18:56.314970 1108527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem --> /usr/share/ca-certificates/1068954.pem (1338 bytes)
	I0717 20:18:56.343293 1108527 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 20:18:56.364262 1108527 ssh_runner.go:195] Run: openssl version
	I0717 20:18:56.370996 1108527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10689542.pem && ln -fs /usr/share/ca-certificates/10689542.pem /etc/ssl/certs/10689542.pem"
	I0717 20:18:56.382202 1108527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10689542.pem
	I0717 20:18:56.388126 1108527 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:52 /usr/share/ca-certificates/10689542.pem
	I0717 20:18:56.388209 1108527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10689542.pem
	I0717 20:18:56.395030 1108527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10689542.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 20:18:56.406906 1108527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 20:18:56.418066 1108527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 20:18:56.424122 1108527 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0717 20:18:56.424206 1108527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 20:18:56.430870 1108527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 20:18:56.442310 1108527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1068954.pem && ln -fs /usr/share/ca-certificates/1068954.pem /etc/ssl/certs/1068954.pem"
	I0717 20:18:56.454217 1108527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1068954.pem
	I0717 20:18:56.461203 1108527 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:52 /usr/share/ca-certificates/1068954.pem
	I0717 20:18:56.461303 1108527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1068954.pem
	I0717 20:18:56.469623 1108527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1068954.pem /etc/ssl/certs/51391683.0"
	I0717 20:18:56.480911 1108527 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 20:18:56.488028 1108527 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 20:18:56.488105 1108527 kubeadm.go:404] StartCluster: {Name:flannel-395471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:flannel-395471 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.151 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 20:18:56.488284 1108527 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 20:18:56.488369 1108527 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 20:18:56.526130 1108527 cri.go:89] found id: ""
	I0717 20:18:56.526220 1108527 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 20:18:56.536463 1108527 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 20:18:56.546215 1108527 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 20:18:56.555793 1108527 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 20:18:56.555845 1108527 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 20:18:56.623714 1108527 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0717 20:18:56.623862 1108527 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 20:18:56.769550 1108527 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 20:18:56.769768 1108527 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 20:18:56.769921 1108527 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 20:18:57.000987 1108527 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 20:18:54.574076 1107949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:18:57.392210 1107949 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.193042933s)
	I0717 20:18:57.392275 1107949 main.go:141] libmachine: Making call to close driver server
	I0717 20:18:57.392274 1107949 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.146359614s)
	I0717 20:18:57.392213 1107949 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.161291621s)
	I0717 20:18:57.392303 1107949 start.go:917] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0717 20:18:57.392310 1107949 main.go:141] libmachine: Making call to close driver server
	I0717 20:18:57.392347 1107949 main.go:141] libmachine: (auto-395471) Calling .Close
	I0717 20:18:57.392358 1107949 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.818253651s)
	I0717 20:18:57.392289 1107949 main.go:141] libmachine: (auto-395471) Calling .Close
	I0717 20:18:57.392705 1107949 main.go:141] libmachine: (auto-395471) DBG | Closing plugin on server side
	I0717 20:18:57.392740 1107949 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:18:57.392741 1107949 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:18:57.392756 1107949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:18:57.392757 1107949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:18:57.392767 1107949 main.go:141] libmachine: Making call to close driver server
	I0717 20:18:57.392772 1107949 main.go:141] libmachine: Making call to close driver server
	I0717 20:18:57.392776 1107949 main.go:141] libmachine: (auto-395471) Calling .Close
	I0717 20:18:57.392781 1107949 main.go:141] libmachine: (auto-395471) Calling .Close
	I0717 20:18:57.394396 1107949 node_ready.go:35] waiting up to 15m0s for node "auto-395471" to be "Ready" ...
	I0717 20:18:57.394529 1107949 main.go:141] libmachine: (auto-395471) DBG | Closing plugin on server side
	I0717 20:18:57.394549 1107949 main.go:141] libmachine: (auto-395471) DBG | Closing plugin on server side
	I0717 20:18:57.394559 1107949 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:18:57.394588 1107949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:18:57.394605 1107949 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:18:57.394670 1107949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:18:57.394608 1107949 main.go:141] libmachine: Making call to close driver server
	I0717 20:18:57.394722 1107949 main.go:141] libmachine: (auto-395471) Calling .Close
	I0717 20:18:57.396419 1107949 main.go:141] libmachine: (auto-395471) DBG | Closing plugin on server side
	I0717 20:18:57.396420 1107949 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:18:57.396449 1107949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:18:57.399031 1107949 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0717 20:18:57.106829 1108527 out.go:204]   - Generating certificates and keys ...
	I0717 20:18:57.107023 1108527 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 20:18:57.107216 1108527 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 20:18:57.270961 1108527 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 20:18:57.552382 1108527 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0717 20:18:57.786066 1108527 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0717 20:18:58.145806 1108527 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0717 20:18:58.285269 1108527 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0717 20:18:58.285513 1108527 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [flannel-395471 localhost] and IPs [192.168.61.151 127.0.0.1 ::1]
	I0717 20:18:58.595126 1108527 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0717 20:18:58.595411 1108527 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [flannel-395471 localhost] and IPs [192.168.61.151 127.0.0.1 ::1]
	I0717 20:18:57.400965 1107949 addons.go:502] enable addons completed in 3.488985043s: enabled=[storage-provisioner default-storageclass]
	I0717 20:18:57.431107 1107949 node_ready.go:49] node "auto-395471" has status "Ready":"True"
	I0717 20:18:57.431144 1107949 node_ready.go:38] duration metric: took 36.719984ms waiting for node "auto-395471" to be "Ready" ...
	I0717 20:18:57.431157 1107949 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 20:18:57.450174 1107949 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5d78c9869d-5q5nc" in "kube-system" namespace to be "Ready" ...
	I0717 20:18:58.898674 1108527 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 20:18:59.088387 1108527 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 20:18:59.184266 1108527 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0717 20:18:59.184419 1108527 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 20:18:59.329133 1108527 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 20:18:59.404702 1108527 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 20:18:59.700103 1108527 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 20:18:59.859353 1108527 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 20:18:59.875683 1108527 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 20:18:59.879470 1108527 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 20:18:59.879556 1108527 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 20:19:00.008121 1108527 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 20:19:00.010738 1108527 out.go:204]   - Booting up control plane ...
	I0717 20:19:00.010915 1108527 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 20:19:00.011285 1108527 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 20:19:00.014899 1108527 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 20:19:00.015843 1108527 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 20:19:00.018243 1108527 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 20:18:59.491669 1107949 pod_ready.go:102] pod "coredns-5d78c9869d-5q5nc" in "kube-system" namespace has status "Ready":"False"
	I0717 20:19:01.989431 1107949 pod_ready.go:102] pod "coredns-5d78c9869d-5q5nc" in "kube-system" namespace has status "Ready":"False"
	I0717 20:19:03.990716 1107949 pod_ready.go:102] pod "coredns-5d78c9869d-5q5nc" in "kube-system" namespace has status "Ready":"False"
	I0717 20:19:06.489960 1107949 pod_ready.go:102] pod "coredns-5d78c9869d-5q5nc" in "kube-system" namespace has status "Ready":"False"
	I0717 20:19:08.490269 1107949 pod_ready.go:102] pod "coredns-5d78c9869d-5q5nc" in "kube-system" namespace has status "Ready":"False"
	I0717 20:19:08.987963 1107949 pod_ready.go:97] pod "coredns-5d78c9869d-5q5nc" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-17 20:18:54 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-17 20:18:54 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-17 20:18:54 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-17 20:18:54 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.50.3 PodIP: PodIPs:[] StartTime:2023-07-17 20:18:54 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminate
d{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-07-17 20:18:58 +0000 UTC,FinishedAt:2023-07-17 20:19:08 +0000 UTC,ContainerID:cri-o://6cbcca0c0237ab15f3ef9ecccc0a643c510abe1a13438054b88ed6f52a1e2b66,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:cri-o://6cbcca0c0237ab15f3ef9ecccc0a643c510abe1a13438054b88ed6f52a1e2b66 Started:0xc000fcf210 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0717 20:19:08.987998 1107949 pod_ready.go:81] duration metric: took 11.537787738s waiting for pod "coredns-5d78c9869d-5q5nc" in "kube-system" namespace to be "Ready" ...
	E0717 20:19:08.988008 1107949 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5d78c9869d-5q5nc" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-17 20:18:54 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-17 20:18:54 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-17 20:18:54 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-17 20:18:54 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.50.3 PodIP: PodIPs:[] StartTime:2023-07-17 20:18:54 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Termin
ated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-07-17 20:18:58 +0000 UTC,FinishedAt:2023-07-17 20:19:08 +0000 UTC,ContainerID:cri-o://6cbcca0c0237ab15f3ef9ecccc0a643c510abe1a13438054b88ed6f52a1e2b66,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:cri-o://6cbcca0c0237ab15f3ef9ecccc0a643c510abe1a13438054b88ed6f52a1e2b66 Started:0xc000fcf210 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0717 20:19:08.988015 1107949 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5d78c9869d-tv2pq" in "kube-system" namespace to be "Ready" ...
	I0717 20:19:09.022850 1108527 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.004517 seconds
	I0717 20:19:09.022994 1108527 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 20:19:09.043346 1108527 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 20:19:09.601822 1108527 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 20:19:09.602136 1108527 kubeadm.go:322] [mark-control-plane] Marking the node flannel-395471 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 20:19:10.120375 1108527 kubeadm.go:322] [bootstrap-token] Using token: imqujy.9tfw8kdykte6fd5u
	I0717 20:19:10.122742 1108527 out.go:204]   - Configuring RBAC rules ...
	I0717 20:19:10.122933 1108527 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 20:19:10.130973 1108527 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 20:19:10.146797 1108527 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 20:19:10.163924 1108527 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 20:19:10.171489 1108527 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 20:19:10.179639 1108527 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 20:19:10.201746 1108527 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 20:19:10.492379 1108527 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 20:19:10.548077 1108527 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 20:19:10.549510 1108527 kubeadm.go:322] 
	I0717 20:19:10.549687 1108527 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 20:19:10.549713 1108527 kubeadm.go:322] 
	I0717 20:19:10.549800 1108527 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 20:19:10.549817 1108527 kubeadm.go:322] 
	I0717 20:19:10.549851 1108527 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 20:19:10.549924 1108527 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 20:19:10.550009 1108527 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 20:19:10.550019 1108527 kubeadm.go:322] 
	I0717 20:19:10.550092 1108527 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0717 20:19:10.550106 1108527 kubeadm.go:322] 
	I0717 20:19:10.550196 1108527 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 20:19:10.550212 1108527 kubeadm.go:322] 
	I0717 20:19:10.550285 1108527 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 20:19:10.550385 1108527 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 20:19:10.550474 1108527 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 20:19:10.550484 1108527 kubeadm.go:322] 
	I0717 20:19:10.550607 1108527 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 20:19:10.550717 1108527 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 20:19:10.550727 1108527 kubeadm.go:322] 
	I0717 20:19:10.550842 1108527 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token imqujy.9tfw8kdykte6fd5u \
	I0717 20:19:10.550985 1108527 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a35378d49f60cb3201ada96dd7ea3b95e7cce0b7f53bd002f18d31a55869c8fc \
	I0717 20:19:10.551023 1108527 kubeadm.go:322] 	--control-plane 
	I0717 20:19:10.551033 1108527 kubeadm.go:322] 
	I0717 20:19:10.551141 1108527 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 20:19:10.551156 1108527 kubeadm.go:322] 
	I0717 20:19:10.551281 1108527 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token imqujy.9tfw8kdykte6fd5u \
	I0717 20:19:10.551435 1108527 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a35378d49f60cb3201ada96dd7ea3b95e7cce0b7f53bd002f18d31a55869c8fc 
	I0717 20:19:10.551596 1108527 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 20:19:10.551625 1108527 cni.go:84] Creating CNI manager for "flannel"
	I0717 20:19:10.555841 1108527 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0717 20:19:10.557595 1108527 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 20:19:10.573465 1108527 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0717 20:19:10.573502 1108527 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4615 bytes)
	I0717 20:19:10.616255 1108527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 20:19:12.023670 1108527 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.407373444s)
	I0717 20:19:12.023718 1108527 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 20:19:12.023856 1108527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:19:12.023883 1108527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5 minikube.k8s.io/name=flannel-395471 minikube.k8s.io/updated_at=2023_07_17T20_19_12_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:19:12.211478 1108527 ops.go:34] apiserver oom_adj: -16
	I0717 20:19:12.211654 1108527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:19:12.845686 1108527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:19:13.345604 1108527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:19:11.005630 1107949 pod_ready.go:102] pod "coredns-5d78c9869d-tv2pq" in "kube-system" namespace has status "Ready":"False"
	I0717 20:19:13.505900 1107949 pod_ready.go:102] pod "coredns-5d78c9869d-tv2pq" in "kube-system" namespace has status "Ready":"False"
	I0717 20:19:13.845422 1108527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:19:14.345725 1108527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:19:14.846216 1108527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:19:15.346118 1108527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:19:15.845995 1108527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:19:16.345465 1108527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:19:16.846092 1108527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:19:17.345726 1108527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:19:17.846334 1108527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:19:18.345781 1108527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:19:15.507446 1107949 pod_ready.go:102] pod "coredns-5d78c9869d-tv2pq" in "kube-system" namespace has status "Ready":"False"
	I0717 20:19:18.002347 1107949 pod_ready.go:102] pod "coredns-5d78c9869d-tv2pq" in "kube-system" namespace has status "Ready":"False"
	I0717 20:19:18.846359 1108527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:19:19.346059 1108527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:19:19.845507 1108527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:19:20.346135 1108527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:19:20.846122 1108527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:19:21.346191 1108527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:19:21.846172 1108527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:19:22.346122 1108527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:19:22.846353 1108527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:19:23.345775 1108527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:19:23.845653 1108527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:19:24.034258 1108527 kubeadm.go:1081] duration metric: took 12.01047779s to wait for elevateKubeSystemPrivileges.
	I0717 20:19:24.034320 1108527 kubeadm.go:406] StartCluster complete in 27.546221005s
	I0717 20:19:24.034345 1108527 settings.go:142] acquiring lock: {Name:mk3323296c186763db6ebb5a1c5ae94a6b1a6242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:19:24.034467 1108527 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 20:19:24.037346 1108527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/kubeconfig: {Name:mk7f4fe48169c87be980c7edd9dbe55d4ea8b9ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:19:24.037730 1108527 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 20:19:24.037846 1108527 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 20:19:24.037951 1108527 config.go:182] Loaded profile config "flannel-395471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 20:19:24.037957 1108527 addons.go:69] Setting storage-provisioner=true in profile "flannel-395471"
	I0717 20:19:24.037978 1108527 addons.go:231] Setting addon storage-provisioner=true in "flannel-395471"
	I0717 20:19:24.037990 1108527 addons.go:69] Setting default-storageclass=true in profile "flannel-395471"
	I0717 20:19:24.038039 1108527 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-395471"
	I0717 20:19:24.038067 1108527 host.go:66] Checking if "flannel-395471" exists ...
	I0717 20:19:24.038614 1108527 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:19:24.038638 1108527 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:19:24.038644 1108527 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:19:24.038668 1108527 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:19:24.059777 1108527 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34947
	I0717 20:19:24.059838 1108527 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41581
	I0717 20:19:24.060274 1108527 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:19:24.060589 1108527 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:19:24.060907 1108527 main.go:141] libmachine: Using API Version  1
	I0717 20:19:24.060926 1108527 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:19:24.061297 1108527 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:19:24.061678 1108527 main.go:141] libmachine: Using API Version  1
	I0717 20:19:24.061707 1108527 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:19:24.062188 1108527 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:19:24.062228 1108527 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:19:24.063034 1108527 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:19:24.063297 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetState
	I0717 20:19:24.080233 1108527 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40165
	I0717 20:19:24.080797 1108527 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:19:24.081880 1108527 main.go:141] libmachine: Using API Version  1
	I0717 20:19:24.081911 1108527 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:19:24.082971 1108527 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:19:24.083221 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetState
	I0717 20:19:24.085765 1108527 main.go:141] libmachine: (flannel-395471) Calling .DriverName
	I0717 20:19:24.088879 1108527 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 20:19:20.502655 1107949 pod_ready.go:102] pod "coredns-5d78c9869d-tv2pq" in "kube-system" namespace has status "Ready":"False"
	I0717 20:19:22.503746 1107949 pod_ready.go:102] pod "coredns-5d78c9869d-tv2pq" in "kube-system" namespace has status "Ready":"False"
	I0717 20:19:24.090916 1108527 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 20:19:24.090938 1108527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 20:19:24.090967 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHHostname
	I0717 20:19:24.095196 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:19:24.095600 1108527 main.go:141] libmachine: (flannel-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:0a:fd", ip: ""} in network mk-flannel-395471: {Iface:virbr4 ExpiryTime:2023-07-17 21:18:36 +0000 UTC Type:0 Mac:52:54:00:2d:0a:fd Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:flannel-395471 Clientid:01:52:54:00:2d:0a:fd}
	I0717 20:19:24.095632 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined IP address 192.168.61.151 and MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:19:24.095832 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHPort
	I0717 20:19:24.096104 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHKeyPath
	I0717 20:19:24.096278 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHUsername
	I0717 20:19:24.096419 1108527 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/flannel-395471/id_rsa Username:docker}
	I0717 20:19:24.112260 1108527 addons.go:231] Setting addon default-storageclass=true in "flannel-395471"
	I0717 20:19:24.112330 1108527 host.go:66] Checking if "flannel-395471" exists ...
	I0717 20:19:24.112906 1108527 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:19:24.112953 1108527 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:19:24.130240 1108527 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43901
	I0717 20:19:24.130738 1108527 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:19:24.131374 1108527 main.go:141] libmachine: Using API Version  1
	I0717 20:19:24.131394 1108527 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:19:24.131898 1108527 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:19:24.132711 1108527 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:19:24.132743 1108527 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:19:24.154069 1108527 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45635
	I0717 20:19:24.154564 1108527 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:19:24.155227 1108527 main.go:141] libmachine: Using API Version  1
	I0717 20:19:24.155249 1108527 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:19:24.155667 1108527 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:19:24.155915 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetState
	I0717 20:19:24.157879 1108527 main.go:141] libmachine: (flannel-395471) Calling .DriverName
	I0717 20:19:24.159191 1108527 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 20:19:24.159211 1108527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 20:19:24.159234 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHHostname
	I0717 20:19:24.162712 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:19:24.163182 1108527 main.go:141] libmachine: (flannel-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:0a:fd", ip: ""} in network mk-flannel-395471: {Iface:virbr4 ExpiryTime:2023-07-17 21:18:36 +0000 UTC Type:0 Mac:52:54:00:2d:0a:fd Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:flannel-395471 Clientid:01:52:54:00:2d:0a:fd}
	I0717 20:19:24.163228 1108527 main.go:141] libmachine: (flannel-395471) DBG | domain flannel-395471 has defined IP address 192.168.61.151 and MAC address 52:54:00:2d:0a:fd in network mk-flannel-395471
	I0717 20:19:24.163516 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHPort
	I0717 20:19:24.163794 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHKeyPath
	I0717 20:19:24.163972 1108527 main.go:141] libmachine: (flannel-395471) Calling .GetSSHUsername
	I0717 20:19:24.164177 1108527 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/flannel-395471/id_rsa Username:docker}
	I0717 20:19:24.260752 1108527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 20:19:24.329506 1108527 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 20:19:24.364771 1108527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 20:19:24.627506 1108527 kapi.go:248] "coredns" deployment in "kube-system" namespace and "flannel-395471" context rescaled to 1 replicas
	I0717 20:19:24.627562 1108527 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.61.151 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 20:19:24.630006 1108527 out.go:177] * Verifying Kubernetes components...
	I0717 20:19:24.632055 1108527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:19:25.467176 1108527 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.206376589s)
	I0717 20:19:25.467228 1108527 main.go:141] libmachine: Making call to close driver server
	I0717 20:19:25.467240 1108527 main.go:141] libmachine: (flannel-395471) Calling .Close
	I0717 20:19:25.467239 1108527 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.13769181s)
	I0717 20:19:25.467262 1108527 start.go:917] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0717 20:19:25.467385 1108527 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.10257819s)
	I0717 20:19:25.467488 1108527 main.go:141] libmachine: Making call to close driver server
	I0717 20:19:25.467507 1108527 main.go:141] libmachine: (flannel-395471) Calling .Close
	I0717 20:19:25.467548 1108527 main.go:141] libmachine: (flannel-395471) DBG | Closing plugin on server side
	I0717 20:19:25.467563 1108527 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:19:25.467578 1108527 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:19:25.467588 1108527 main.go:141] libmachine: Making call to close driver server
	I0717 20:19:25.467598 1108527 main.go:141] libmachine: (flannel-395471) Calling .Close
	I0717 20:19:25.467772 1108527 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:19:25.467788 1108527 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:19:25.467798 1108527 main.go:141] libmachine: Making call to close driver server
	I0717 20:19:25.467808 1108527 main.go:141] libmachine: (flannel-395471) Calling .Close
	I0717 20:19:25.467883 1108527 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:19:25.467902 1108527 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:19:25.467933 1108527 main.go:141] libmachine: (flannel-395471) DBG | Closing plugin on server side
	I0717 20:19:25.469255 1108527 node_ready.go:35] waiting up to 15m0s for node "flannel-395471" to be "Ready" ...
	I0717 20:19:25.469792 1108527 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:19:25.469812 1108527 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:19:25.469826 1108527 main.go:141] libmachine: Making call to close driver server
	I0717 20:19:25.469836 1108527 main.go:141] libmachine: (flannel-395471) Calling .Close
	I0717 20:19:25.470073 1108527 main.go:141] libmachine: (flannel-395471) DBG | Closing plugin on server side
	I0717 20:19:25.470122 1108527 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:19:25.470141 1108527 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:19:25.473928 1108527 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0717 20:19:25.476352 1108527 addons.go:502] enable addons completed in 1.43849333s: enabled=[storage-provisioner default-storageclass]
	I0717 20:19:27.489359 1108527 node_ready.go:58] node "flannel-395471" has status "Ready":"False"
	I0717 20:19:25.004280 1107949 pod_ready.go:102] pod "coredns-5d78c9869d-tv2pq" in "kube-system" namespace has status "Ready":"False"
	I0717 20:19:27.503148 1107949 pod_ready.go:102] pod "coredns-5d78c9869d-tv2pq" in "kube-system" namespace has status "Ready":"False"
	I0717 20:19:29.990094 1108527 node_ready.go:58] node "flannel-395471" has status "Ready":"False"
	I0717 20:19:30.495862 1108527 node_ready.go:49] node "flannel-395471" has status "Ready":"True"
	I0717 20:19:30.495895 1108527 node_ready.go:38] duration metric: took 5.026609434s waiting for node "flannel-395471" to be "Ready" ...
	I0717 20:19:30.495906 1108527 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 20:19:30.505195 1108527 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5d78c9869d-sl9pj" in "kube-system" namespace to be "Ready" ...
	I0717 20:19:32.522517 1108527 pod_ready.go:102] pod "coredns-5d78c9869d-sl9pj" in "kube-system" namespace has status "Ready":"False"
	I0717 20:19:29.504380 1107949 pod_ready.go:102] pod "coredns-5d78c9869d-tv2pq" in "kube-system" namespace has status "Ready":"False"
	I0717 20:19:32.003823 1107949 pod_ready.go:102] pod "coredns-5d78c9869d-tv2pq" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-07-17 19:58:30 UTC, ends at Mon 2023-07-17 20:19:35 UTC. --
	Jul 17 20:19:34 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:19:34.818069472Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064,PodSandboxId:a4576d8c267801d343e59914c2c8e248871e45fbf878fc20bdb0931a46ca7240,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689623978088243350,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43705714-97f1-4b06-8eeb-04d60c22112a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaf554,io.kubernetes.container.restartCount:
3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac703c93251a1c625d5c119244b2df7ffaa5b7a3851a204813c22e503a4bb9c,PodSandboxId:61887e5f4fc14f0d25fbffa093f73ced556889fe435a0858ed95a47f7a3e2f5a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689623952987320194,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49340f84-ca4d-4b97-af9e-87640bf8f354,},Annotations:map[string]string{io.kubernetes.container.hash: f929af23,io.kubernetes.container.restartCount: 1,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524,PodSandboxId:0775100a29b1ec53be6cb56232955fea88a907bad5118d62422a27e0966ba1f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689623951555591856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-rjqsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f27e2de9-9849-40e2-b6dc-1ee27537b1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 691dbde2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951,PodSandboxId:acc4e1b28ae1cbd938d5dae4a844a05115d84c2c05ed0806fba92cd0a19ceeaf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689623947494839330,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9qfpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ecb84bb9-57a2-4a42-8104-b792d38479ca,},Annotations:map[string]string{io.kubernetes.container.hash: fc6896a8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261,PodSandboxId:049c6810e8b181a0e1f8bb999623af3524eff314807d3aba18e29aa8d0a2d4f5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689623938394977500,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 3a6acc96761078f5e3f112b985fb36ac,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2,PodSandboxId:acc8fd3a376672b9813f427e64d8501593644c51fe03cfd3ae8972897ba16597,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689623938368769020,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb6983b48c57f61fd65c2f60ecf99612,},A
nnotations:map[string]string{io.kubernetes.container.hash: eff98e2d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6,PodSandboxId:a6fea49e65dd13d4622e3427f23f2297d5b80538763c6ff6fe2dca187928b337,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689623938200355022,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
427287902605e4c35be8b9f387563dd1,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a,PodSandboxId:60e19ea94bfee45cf0163945a8cac877b9bddcbe848976f76a6c98319c8167c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689623937828074425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
da26d20fa8eb535904c512a72aaf9f4a,},Annotations:map[string]string{io.kubernetes.container.hash: 90d81152,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b01ba1b5-925b-4923-b77d-9ce133305e2b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 20:19:34 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:19:34.892525338Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=25f3edf4-3e26-4cd8-8b70-cac42572d482 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:19:34 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:19:34.892649771Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=25f3edf4-3e26-4cd8-8b70-cac42572d482 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:19:34 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:19:34.893002398Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064,PodSandboxId:a4576d8c267801d343e59914c2c8e248871e45fbf878fc20bdb0931a46ca7240,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689623978088243350,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43705714-97f1-4b06-8eeb-04d60c22112a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaf554,io.kubernetes.container.restartCount:
3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac703c93251a1c625d5c119244b2df7ffaa5b7a3851a204813c22e503a4bb9c,PodSandboxId:61887e5f4fc14f0d25fbffa093f73ced556889fe435a0858ed95a47f7a3e2f5a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689623952987320194,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49340f84-ca4d-4b97-af9e-87640bf8f354,},Annotations:map[string]string{io.kubernetes.container.hash: f929af23,io.kubernetes.container.restartCount: 1,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524,PodSandboxId:0775100a29b1ec53be6cb56232955fea88a907bad5118d62422a27e0966ba1f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689623951555591856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-rjqsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f27e2de9-9849-40e2-b6dc-1ee27537b1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 691dbde2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88,PodSandboxId:a4576d8c267801d343e59914c2c8e248871e45fbf878fc20bdb0931a46ca7240,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689623947574728622,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 43705714-97f1-4b06-8eeb-04d60c22112a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaf554,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951,PodSandboxId:acc4e1b28ae1cbd938d5dae4a844a05115d84c2c05ed0806fba92cd0a19ceeaf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689623947494839330,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9qfpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecb
84bb9-57a2-4a42-8104-b792d38479ca,},Annotations:map[string]string{io.kubernetes.container.hash: fc6896a8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261,PodSandboxId:049c6810e8b181a0e1f8bb999623af3524eff314807d3aba18e29aa8d0a2d4f5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689623938394977500,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3a6acc96761078f5e3f112b985fb36ac,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2,PodSandboxId:acc8fd3a376672b9813f427e64d8501593644c51fe03cfd3ae8972897ba16597,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689623938368769020,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb6983b48c57f61fd65c2f60ecf99612,},Anno
tations:map[string]string{io.kubernetes.container.hash: eff98e2d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6,PodSandboxId:a6fea49e65dd13d4622e3427f23f2297d5b80538763c6ff6fe2dca187928b337,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689623938200355022,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 427
287902605e4c35be8b9f387563dd1,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a,PodSandboxId:60e19ea94bfee45cf0163945a8cac877b9bddcbe848976f76a6c98319c8167c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689623937828074425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da2
6d20fa8eb535904c512a72aaf9f4a,},Annotations:map[string]string{io.kubernetes.container.hash: 90d81152,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=25f3edf4-3e26-4cd8-8b70-cac42572d482 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:19:34 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:19:34.935127968Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=69cde3d9-fbcc-4b52-ad2b-cd155a90be33 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:19:34 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:19:34.935243086Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=69cde3d9-fbcc-4b52-ad2b-cd155a90be33 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:19:34 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:19:34.935606462Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064,PodSandboxId:a4576d8c267801d343e59914c2c8e248871e45fbf878fc20bdb0931a46ca7240,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689623978088243350,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43705714-97f1-4b06-8eeb-04d60c22112a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaf554,io.kubernetes.container.restartCount:
3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac703c93251a1c625d5c119244b2df7ffaa5b7a3851a204813c22e503a4bb9c,PodSandboxId:61887e5f4fc14f0d25fbffa093f73ced556889fe435a0858ed95a47f7a3e2f5a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689623952987320194,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49340f84-ca4d-4b97-af9e-87640bf8f354,},Annotations:map[string]string{io.kubernetes.container.hash: f929af23,io.kubernetes.container.restartCount: 1,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524,PodSandboxId:0775100a29b1ec53be6cb56232955fea88a907bad5118d62422a27e0966ba1f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689623951555591856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-rjqsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f27e2de9-9849-40e2-b6dc-1ee27537b1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 691dbde2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88,PodSandboxId:a4576d8c267801d343e59914c2c8e248871e45fbf878fc20bdb0931a46ca7240,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689623947574728622,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 43705714-97f1-4b06-8eeb-04d60c22112a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaf554,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951,PodSandboxId:acc4e1b28ae1cbd938d5dae4a844a05115d84c2c05ed0806fba92cd0a19ceeaf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689623947494839330,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9qfpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecb
84bb9-57a2-4a42-8104-b792d38479ca,},Annotations:map[string]string{io.kubernetes.container.hash: fc6896a8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261,PodSandboxId:049c6810e8b181a0e1f8bb999623af3524eff314807d3aba18e29aa8d0a2d4f5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689623938394977500,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3a6acc96761078f5e3f112b985fb36ac,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2,PodSandboxId:acc8fd3a376672b9813f427e64d8501593644c51fe03cfd3ae8972897ba16597,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689623938368769020,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb6983b48c57f61fd65c2f60ecf99612,},Anno
tations:map[string]string{io.kubernetes.container.hash: eff98e2d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6,PodSandboxId:a6fea49e65dd13d4622e3427f23f2297d5b80538763c6ff6fe2dca187928b337,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689623938200355022,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 427
287902605e4c35be8b9f387563dd1,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a,PodSandboxId:60e19ea94bfee45cf0163945a8cac877b9bddcbe848976f76a6c98319c8167c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689623937828074425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da2
6d20fa8eb535904c512a72aaf9f4a,},Annotations:map[string]string{io.kubernetes.container.hash: 90d81152,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=69cde3d9-fbcc-4b52-ad2b-cd155a90be33 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:19:34 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:19:34.978159009Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f6dd53e8-1b10-4462-bdf5-4298db14a87a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:19:34 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:19:34.978302428Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f6dd53e8-1b10-4462-bdf5-4298db14a87a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:19:34 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:19:34.978630155Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064,PodSandboxId:a4576d8c267801d343e59914c2c8e248871e45fbf878fc20bdb0931a46ca7240,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689623978088243350,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43705714-97f1-4b06-8eeb-04d60c22112a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaf554,io.kubernetes.container.restartCount:
3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac703c93251a1c625d5c119244b2df7ffaa5b7a3851a204813c22e503a4bb9c,PodSandboxId:61887e5f4fc14f0d25fbffa093f73ced556889fe435a0858ed95a47f7a3e2f5a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689623952987320194,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49340f84-ca4d-4b97-af9e-87640bf8f354,},Annotations:map[string]string{io.kubernetes.container.hash: f929af23,io.kubernetes.container.restartCount: 1,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524,PodSandboxId:0775100a29b1ec53be6cb56232955fea88a907bad5118d62422a27e0966ba1f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689623951555591856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-rjqsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f27e2de9-9849-40e2-b6dc-1ee27537b1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 691dbde2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88,PodSandboxId:a4576d8c267801d343e59914c2c8e248871e45fbf878fc20bdb0931a46ca7240,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689623947574728622,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 43705714-97f1-4b06-8eeb-04d60c22112a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaf554,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951,PodSandboxId:acc4e1b28ae1cbd938d5dae4a844a05115d84c2c05ed0806fba92cd0a19ceeaf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689623947494839330,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9qfpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecb
84bb9-57a2-4a42-8104-b792d38479ca,},Annotations:map[string]string{io.kubernetes.container.hash: fc6896a8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261,PodSandboxId:049c6810e8b181a0e1f8bb999623af3524eff314807d3aba18e29aa8d0a2d4f5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689623938394977500,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3a6acc96761078f5e3f112b985fb36ac,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2,PodSandboxId:acc8fd3a376672b9813f427e64d8501593644c51fe03cfd3ae8972897ba16597,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689623938368769020,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb6983b48c57f61fd65c2f60ecf99612,},Anno
tations:map[string]string{io.kubernetes.container.hash: eff98e2d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6,PodSandboxId:a6fea49e65dd13d4622e3427f23f2297d5b80538763c6ff6fe2dca187928b337,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689623938200355022,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 427
287902605e4c35be8b9f387563dd1,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a,PodSandboxId:60e19ea94bfee45cf0163945a8cac877b9bddcbe848976f76a6c98319c8167c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689623937828074425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da2
6d20fa8eb535904c512a72aaf9f4a,},Annotations:map[string]string{io.kubernetes.container.hash: 90d81152,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f6dd53e8-1b10-4462-bdf5-4298db14a87a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:19:35 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:19:35.027676793Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9a895e9b-7f3a-4dad-8309-6a0ba0ab6626 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:19:35 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:19:35.027775004Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9a895e9b-7f3a-4dad-8309-6a0ba0ab6626 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:19:35 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:19:35.028037578Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064,PodSandboxId:a4576d8c267801d343e59914c2c8e248871e45fbf878fc20bdb0931a46ca7240,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689623978088243350,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43705714-97f1-4b06-8eeb-04d60c22112a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaf554,io.kubernetes.container.restartCount:
3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac703c93251a1c625d5c119244b2df7ffaa5b7a3851a204813c22e503a4bb9c,PodSandboxId:61887e5f4fc14f0d25fbffa093f73ced556889fe435a0858ed95a47f7a3e2f5a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689623952987320194,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49340f84-ca4d-4b97-af9e-87640bf8f354,},Annotations:map[string]string{io.kubernetes.container.hash: f929af23,io.kubernetes.container.restartCount: 1,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524,PodSandboxId:0775100a29b1ec53be6cb56232955fea88a907bad5118d62422a27e0966ba1f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689623951555591856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-rjqsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f27e2de9-9849-40e2-b6dc-1ee27537b1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 691dbde2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88,PodSandboxId:a4576d8c267801d343e59914c2c8e248871e45fbf878fc20bdb0931a46ca7240,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689623947574728622,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 43705714-97f1-4b06-8eeb-04d60c22112a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaf554,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951,PodSandboxId:acc4e1b28ae1cbd938d5dae4a844a05115d84c2c05ed0806fba92cd0a19ceeaf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689623947494839330,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9qfpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecb
84bb9-57a2-4a42-8104-b792d38479ca,},Annotations:map[string]string{io.kubernetes.container.hash: fc6896a8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261,PodSandboxId:049c6810e8b181a0e1f8bb999623af3524eff314807d3aba18e29aa8d0a2d4f5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689623938394977500,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3a6acc96761078f5e3f112b985fb36ac,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2,PodSandboxId:acc8fd3a376672b9813f427e64d8501593644c51fe03cfd3ae8972897ba16597,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689623938368769020,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb6983b48c57f61fd65c2f60ecf99612,},Anno
tations:map[string]string{io.kubernetes.container.hash: eff98e2d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6,PodSandboxId:a6fea49e65dd13d4622e3427f23f2297d5b80538763c6ff6fe2dca187928b337,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689623938200355022,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 427
287902605e4c35be8b9f387563dd1,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a,PodSandboxId:60e19ea94bfee45cf0163945a8cac877b9bddcbe848976f76a6c98319c8167c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689623937828074425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da2
6d20fa8eb535904c512a72aaf9f4a,},Annotations:map[string]string{io.kubernetes.container.hash: 90d81152,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9a895e9b-7f3a-4dad-8309-6a0ba0ab6626 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:19:35 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:19:35.070029048Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a53c008f-91f7-435d-ab78-2f12ef225407 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:19:35 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:19:35.070127218Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a53c008f-91f7-435d-ab78-2f12ef225407 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:19:35 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:19:35.070380605Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064,PodSandboxId:a4576d8c267801d343e59914c2c8e248871e45fbf878fc20bdb0931a46ca7240,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689623978088243350,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43705714-97f1-4b06-8eeb-04d60c22112a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaf554,io.kubernetes.container.restartCount:
3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac703c93251a1c625d5c119244b2df7ffaa5b7a3851a204813c22e503a4bb9c,PodSandboxId:61887e5f4fc14f0d25fbffa093f73ced556889fe435a0858ed95a47f7a3e2f5a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689623952987320194,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49340f84-ca4d-4b97-af9e-87640bf8f354,},Annotations:map[string]string{io.kubernetes.container.hash: f929af23,io.kubernetes.container.restartCount: 1,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524,PodSandboxId:0775100a29b1ec53be6cb56232955fea88a907bad5118d62422a27e0966ba1f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689623951555591856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-rjqsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f27e2de9-9849-40e2-b6dc-1ee27537b1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 691dbde2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88,PodSandboxId:a4576d8c267801d343e59914c2c8e248871e45fbf878fc20bdb0931a46ca7240,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689623947574728622,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 43705714-97f1-4b06-8eeb-04d60c22112a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaf554,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951,PodSandboxId:acc4e1b28ae1cbd938d5dae4a844a05115d84c2c05ed0806fba92cd0a19ceeaf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689623947494839330,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9qfpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecb
84bb9-57a2-4a42-8104-b792d38479ca,},Annotations:map[string]string{io.kubernetes.container.hash: fc6896a8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261,PodSandboxId:049c6810e8b181a0e1f8bb999623af3524eff314807d3aba18e29aa8d0a2d4f5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689623938394977500,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3a6acc96761078f5e3f112b985fb36ac,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2,PodSandboxId:acc8fd3a376672b9813f427e64d8501593644c51fe03cfd3ae8972897ba16597,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689623938368769020,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb6983b48c57f61fd65c2f60ecf99612,},Anno
tations:map[string]string{io.kubernetes.container.hash: eff98e2d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6,PodSandboxId:a6fea49e65dd13d4622e3427f23f2297d5b80538763c6ff6fe2dca187928b337,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689623938200355022,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 427
287902605e4c35be8b9f387563dd1,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a,PodSandboxId:60e19ea94bfee45cf0163945a8cac877b9bddcbe848976f76a6c98319c8167c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689623937828074425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da2
6d20fa8eb535904c512a72aaf9f4a,},Annotations:map[string]string{io.kubernetes.container.hash: 90d81152,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a53c008f-91f7-435d-ab78-2f12ef225407 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:19:35 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:19:35.108306722Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=01689154-3cb2-4908-a6a0-1e1cf2749cf7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:19:35 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:19:35.108519888Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=01689154-3cb2-4908-a6a0-1e1cf2749cf7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:19:35 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:19:35.108756941Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064,PodSandboxId:a4576d8c267801d343e59914c2c8e248871e45fbf878fc20bdb0931a46ca7240,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689623978088243350,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43705714-97f1-4b06-8eeb-04d60c22112a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaf554,io.kubernetes.container.restartCount:
3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac703c93251a1c625d5c119244b2df7ffaa5b7a3851a204813c22e503a4bb9c,PodSandboxId:61887e5f4fc14f0d25fbffa093f73ced556889fe435a0858ed95a47f7a3e2f5a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689623952987320194,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49340f84-ca4d-4b97-af9e-87640bf8f354,},Annotations:map[string]string{io.kubernetes.container.hash: f929af23,io.kubernetes.container.restartCount: 1,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524,PodSandboxId:0775100a29b1ec53be6cb56232955fea88a907bad5118d62422a27e0966ba1f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689623951555591856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-rjqsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f27e2de9-9849-40e2-b6dc-1ee27537b1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 691dbde2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88,PodSandboxId:a4576d8c267801d343e59914c2c8e248871e45fbf878fc20bdb0931a46ca7240,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689623947574728622,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 43705714-97f1-4b06-8eeb-04d60c22112a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaf554,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951,PodSandboxId:acc4e1b28ae1cbd938d5dae4a844a05115d84c2c05ed0806fba92cd0a19ceeaf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689623947494839330,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9qfpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecb
84bb9-57a2-4a42-8104-b792d38479ca,},Annotations:map[string]string{io.kubernetes.container.hash: fc6896a8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261,PodSandboxId:049c6810e8b181a0e1f8bb999623af3524eff314807d3aba18e29aa8d0a2d4f5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689623938394977500,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3a6acc96761078f5e3f112b985fb36ac,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2,PodSandboxId:acc8fd3a376672b9813f427e64d8501593644c51fe03cfd3ae8972897ba16597,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689623938368769020,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb6983b48c57f61fd65c2f60ecf99612,},Anno
tations:map[string]string{io.kubernetes.container.hash: eff98e2d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6,PodSandboxId:a6fea49e65dd13d4622e3427f23f2297d5b80538763c6ff6fe2dca187928b337,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689623938200355022,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 427
287902605e4c35be8b9f387563dd1,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a,PodSandboxId:60e19ea94bfee45cf0163945a8cac877b9bddcbe848976f76a6c98319c8167c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689623937828074425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da2
6d20fa8eb535904c512a72aaf9f4a,},Annotations:map[string]string{io.kubernetes.container.hash: 90d81152,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=01689154-3cb2-4908-a6a0-1e1cf2749cf7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:19:35 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:19:35.150493231Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f9bc3ad1-c148-4296-b57b-4338848d34ca name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:19:35 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:19:35.150589424Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f9bc3ad1-c148-4296-b57b-4338848d34ca name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:19:35 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:19:35.150808159Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064,PodSandboxId:a4576d8c267801d343e59914c2c8e248871e45fbf878fc20bdb0931a46ca7240,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689623978088243350,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43705714-97f1-4b06-8eeb-04d60c22112a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaf554,io.kubernetes.container.restartCount:
3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac703c93251a1c625d5c119244b2df7ffaa5b7a3851a204813c22e503a4bb9c,PodSandboxId:61887e5f4fc14f0d25fbffa093f73ced556889fe435a0858ed95a47f7a3e2f5a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689623952987320194,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49340f84-ca4d-4b97-af9e-87640bf8f354,},Annotations:map[string]string{io.kubernetes.container.hash: f929af23,io.kubernetes.container.restartCount: 1,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524,PodSandboxId:0775100a29b1ec53be6cb56232955fea88a907bad5118d62422a27e0966ba1f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689623951555591856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-rjqsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f27e2de9-9849-40e2-b6dc-1ee27537b1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 691dbde2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88,PodSandboxId:a4576d8c267801d343e59914c2c8e248871e45fbf878fc20bdb0931a46ca7240,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689623947574728622,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 43705714-97f1-4b06-8eeb-04d60c22112a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaf554,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951,PodSandboxId:acc4e1b28ae1cbd938d5dae4a844a05115d84c2c05ed0806fba92cd0a19ceeaf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689623947494839330,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9qfpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecb
84bb9-57a2-4a42-8104-b792d38479ca,},Annotations:map[string]string{io.kubernetes.container.hash: fc6896a8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261,PodSandboxId:049c6810e8b181a0e1f8bb999623af3524eff314807d3aba18e29aa8d0a2d4f5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689623938394977500,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3a6acc96761078f5e3f112b985fb36ac,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2,PodSandboxId:acc8fd3a376672b9813f427e64d8501593644c51fe03cfd3ae8972897ba16597,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689623938368769020,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb6983b48c57f61fd65c2f60ecf99612,},Anno
tations:map[string]string{io.kubernetes.container.hash: eff98e2d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6,PodSandboxId:a6fea49e65dd13d4622e3427f23f2297d5b80538763c6ff6fe2dca187928b337,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689623938200355022,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 427
287902605e4c35be8b9f387563dd1,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a,PodSandboxId:60e19ea94bfee45cf0163945a8cac877b9bddcbe848976f76a6c98319c8167c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689623937828074425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da2
6d20fa8eb535904c512a72aaf9f4a,},Annotations:map[string]string{io.kubernetes.container.hash: 90d81152,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f9bc3ad1-c148-4296-b57b-4338848d34ca name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:19:35 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:19:35.191913164Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4540880a-a711-46bf-ac86-5b75d34496e2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:19:35 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:19:35.192006071Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4540880a-a711-46bf-ac86-5b75d34496e2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:19:35 default-k8s-diff-port-711413 crio[711]: time="2023-07-17 20:19:35.192255324Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064,PodSandboxId:a4576d8c267801d343e59914c2c8e248871e45fbf878fc20bdb0931a46ca7240,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689623978088243350,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43705714-97f1-4b06-8eeb-04d60c22112a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaf554,io.kubernetes.container.restartCount:
3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac703c93251a1c625d5c119244b2df7ffaa5b7a3851a204813c22e503a4bb9c,PodSandboxId:61887e5f4fc14f0d25fbffa093f73ced556889fe435a0858ed95a47f7a3e2f5a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689623952987320194,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49340f84-ca4d-4b97-af9e-87640bf8f354,},Annotations:map[string]string{io.kubernetes.container.hash: f929af23,io.kubernetes.container.restartCount: 1,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524,PodSandboxId:0775100a29b1ec53be6cb56232955fea88a907bad5118d62422a27e0966ba1f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689623951555591856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-rjqsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f27e2de9-9849-40e2-b6dc-1ee27537b1e6,},Annotations:map[string]string{io.kubernetes.container.hash: 691dbde2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88,PodSandboxId:a4576d8c267801d343e59914c2c8e248871e45fbf878fc20bdb0931a46ca7240,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689623947574728622,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 43705714-97f1-4b06-8eeb-04d60c22112a,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaf554,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951,PodSandboxId:acc4e1b28ae1cbd938d5dae4a844a05115d84c2c05ed0806fba92cd0a19ceeaf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689623947494839330,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9qfpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecb
84bb9-57a2-4a42-8104-b792d38479ca,},Annotations:map[string]string{io.kubernetes.container.hash: fc6896a8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261,PodSandboxId:049c6810e8b181a0e1f8bb999623af3524eff314807d3aba18e29aa8d0a2d4f5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689623938394977500,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3a6acc96761078f5e3f112b985fb36ac,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2,PodSandboxId:acc8fd3a376672b9813f427e64d8501593644c51fe03cfd3ae8972897ba16597,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689623938368769020,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb6983b48c57f61fd65c2f60ecf99612,},Anno
tations:map[string]string{io.kubernetes.container.hash: eff98e2d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6,PodSandboxId:a6fea49e65dd13d4622e3427f23f2297d5b80538763c6ff6fe2dca187928b337,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689623938200355022,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 427
287902605e4c35be8b9f387563dd1,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a,PodSandboxId:60e19ea94bfee45cf0163945a8cac877b9bddcbe848976f76a6c98319c8167c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689623937828074425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-711413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da2
6d20fa8eb535904c512a72aaf9f4a,},Annotations:map[string]string{io.kubernetes.container.hash: 90d81152,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4540880a-a711-46bf-ac86-5b75d34496e2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	19f50eeeb11e7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       3                   a4576d8c26780
	5ac703c93251a       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   61887e5f4fc14
	cb8cdd2d3f50b       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      20 minutes ago      Running             coredns                   1                   0775100a29b1e
	4a47132787243       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       2                   a4576d8c26780
	76ea7912be2a5       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c                                      20 minutes ago      Running             kube-proxy                1                   acc4e1b28ae1c
	9790a6abc4658       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a                                      20 minutes ago      Running             kube-scheduler            1                   049c6810e8b18
	bb86b8e5369c2       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681                                      20 minutes ago      Running             etcd                      1                   acc8fd3a37667
	280d9b31ea5e8       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f                                      20 minutes ago      Running             kube-controller-manager   1                   a6fea49e65dd1
	210ff04a86d98       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a                                      20 minutes ago      Running             kube-apiserver            1                   60e19ea94bfee
	
	* 
	* ==> coredns [cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:37471 - 39720 "HINFO IN 1862711990285091975.8658787963171313958. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010230826s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-711413
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-711413
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5
	                    minikube.k8s.io/name=default-k8s-diff-port-711413
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T19_50_25_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 19:50:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-711413
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 20:19:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 20:14:54 +0000   Mon, 17 Jul 2023 19:50:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 20:14:54 +0000   Mon, 17 Jul 2023 19:50:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 20:14:54 +0000   Mon, 17 Jul 2023 19:50:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 20:14:54 +0000   Mon, 17 Jul 2023 19:59:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.51
	  Hostname:    default-k8s-diff-port-711413
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 98afff5ed7e644b585b6493e16507063
	  System UUID:                98afff5e-d7e6-44b5-85b6-493e16507063
	  Boot ID:                    7d80f073-64da-4970-ac03-47f3d9fd982d
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-5d78c9869d-rjqsv                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-default-k8s-diff-port-711413                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-711413             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-711413    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-9qfpg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-default-k8s-diff-port-711413             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-74d5c6b9c-hzcd7                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-711413 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-711413 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-711413 status is now: NodeHasSufficientPID
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-711413 status is now: NodeReady
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           28m                node-controller  Node default-k8s-diff-port-711413 event: Registered Node default-k8s-diff-port-711413 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-711413 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-711413 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node default-k8s-diff-port-711413 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node default-k8s-diff-port-711413 event: Registered Node default-k8s-diff-port-711413 in Controller
	
	* 
	* ==> dmesg <==
	* [Jul17 19:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.074877] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.451778] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.692009] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.159967] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.522211] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.314538] systemd-fstab-generator[636]: Ignoring "noauto" for root device
	[  +0.157240] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.174828] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[  +0.135536] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.262036] systemd-fstab-generator[695]: Ignoring "noauto" for root device
	[ +17.346326] systemd-fstab-generator[911]: Ignoring "noauto" for root device
	[Jul17 19:59] kauditd_printk_skb: 29 callbacks suppressed
	
	* 
	* ==> etcd [bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2] <==
	* {"level":"info","ts":"2023-07-17T20:09:01.772Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":816,"took":"1.853843ms","hash":3863156796}
	{"level":"info","ts":"2023-07-17T20:09:01.772Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3863156796,"revision":816,"compact-revision":-1}
	{"level":"info","ts":"2023-07-17T20:14:01.784Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1058}
	{"level":"info","ts":"2023-07-17T20:14:01.789Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1058,"took":"4.263031ms","hash":1825891744}
	{"level":"info","ts":"2023-07-17T20:14:01.789Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1825891744,"revision":1058,"compact-revision":816}
	{"level":"warn","ts":"2023-07-17T20:18:26.419Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"277.249339ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16714698135784191740 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.72.51\" mod_revision:1506 > success:<request_put:<key:\"/registry/masterleases/192.168.72.51\" value_size:66 lease:7491326098929415930 >> failure:<request_range:<key:\"/registry/masterleases/192.168.72.51\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-07-17T20:18:26.419Z","caller":"traceutil/trace.go:171","msg":"trace[1843827299] linearizableReadLoop","detail":"{readStateIndex:1789; appliedIndex:1788; }","duration":"250.918403ms","start":"2023-07-17T20:18:26.168Z","end":"2023-07-17T20:18:26.419Z","steps":["trace[1843827299] 'read index received'  (duration: 29.468µs)","trace[1843827299] 'applied index is now lower than readState.Index'  (duration: 250.887649ms)"],"step_count":2}
	{"level":"info","ts":"2023-07-17T20:18:26.419Z","caller":"traceutil/trace.go:171","msg":"trace[884152812] transaction","detail":"{read_only:false; response_revision:1514; number_of_response:1; }","duration":"399.319815ms","start":"2023-07-17T20:18:26.020Z","end":"2023-07-17T20:18:26.419Z","steps":["trace[884152812] 'process raft request'  (duration: 121.365087ms)","trace[884152812] 'compare'  (duration: 276.429231ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-17T20:18:26.419Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T20:18:26.020Z","time spent":"399.37269ms","remote":"127.0.0.1:33132","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.72.51\" mod_revision:1506 > success:<request_put:<key:\"/registry/masterleases/192.168.72.51\" value_size:66 lease:7491326098929415930 >> failure:<request_range:<key:\"/registry/masterleases/192.168.72.51\" > >"}
	{"level":"warn","ts":"2023-07-17T20:18:26.420Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"251.497697ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-07-17T20:18:26.420Z","caller":"traceutil/trace.go:171","msg":"trace[1430960209] range","detail":"{range_begin:/registry/storageclasses/; range_end:/registry/storageclasses0; response_count:0; response_revision:1514; }","duration":"251.709309ms","start":"2023-07-17T20:18:26.168Z","end":"2023-07-17T20:18:26.420Z","steps":["trace[1430960209] 'agreement among raft nodes before linearized reading'  (duration: 251.328631ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T20:18:26.560Z","caller":"traceutil/trace.go:171","msg":"trace[44493576] linearizableReadLoop","detail":"{readStateIndex:1790; appliedIndex:1789; }","duration":"140.864617ms","start":"2023-07-17T20:18:26.419Z","end":"2023-07-17T20:18:26.560Z","steps":["trace[44493576] 'read index received'  (duration: 139.027514ms)","trace[44493576] 'applied index is now lower than readState.Index'  (duration: 1.836378ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-17T20:18:26.560Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.900674ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2023-07-17T20:18:26.560Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.135514ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:421"}
	{"level":"info","ts":"2023-07-17T20:18:26.561Z","caller":"traceutil/trace.go:171","msg":"trace[63906097] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:1515; }","duration":"139.200565ms","start":"2023-07-17T20:18:26.421Z","end":"2023-07-17T20:18:26.561Z","steps":["trace[63906097] 'agreement among raft nodes before linearized reading'  (duration: 139.097563ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T20:18:26.561Z","caller":"traceutil/trace.go:171","msg":"trace[1379968952] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1515; }","duration":"142.969497ms","start":"2023-07-17T20:18:26.418Z","end":"2023-07-17T20:18:26.560Z","steps":["trace[1379968952] 'agreement among raft nodes before linearized reading'  (duration: 142.850869ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T20:18:26.561Z","caller":"traceutil/trace.go:171","msg":"trace[1580777996] transaction","detail":"{read_only:false; response_revision:1515; number_of_response:1; }","duration":"270.267883ms","start":"2023-07-17T20:18:26.290Z","end":"2023-07-17T20:18:26.561Z","steps":["trace[1580777996] 'process raft request'  (duration: 267.931679ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T20:18:55.527Z","caller":"traceutil/trace.go:171","msg":"trace[239494769] transaction","detail":"{read_only:false; response_revision:1537; number_of_response:1; }","duration":"101.363525ms","start":"2023-07-17T20:18:55.425Z","end":"2023-07-17T20:18:55.527Z","steps":["trace[239494769] 'process raft request'  (duration: 100.613579ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T20:18:56.343Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.882316ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16714698135784191894 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.72.51\" mod_revision:1530 > success:<request_put:<key:\"/registry/masterleases/192.168.72.51\" value_size:66 lease:7491326098929416084 >> failure:<request_range:<key:\"/registry/masterleases/192.168.72.51\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-07-17T20:18:56.343Z","caller":"traceutil/trace.go:171","msg":"trace[1722733708] transaction","detail":"{read_only:false; response_revision:1538; number_of_response:1; }","duration":"267.046905ms","start":"2023-07-17T20:18:56.076Z","end":"2023-07-17T20:18:56.343Z","steps":["trace[1722733708] 'process raft request'  (duration: 125.790842ms)","trace[1722733708] 'compare'  (duration: 140.702724ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-17T20:18:56.638Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.154764ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-17T20:18:56.638Z","caller":"traceutil/trace.go:171","msg":"trace[963623841] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1538; }","duration":"163.740596ms","start":"2023-07-17T20:18:56.475Z","end":"2023-07-17T20:18:56.638Z","steps":["trace[963623841] 'range keys from in-memory index tree'  (duration: 163.064956ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T20:19:01.799Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1301}
	{"level":"info","ts":"2023-07-17T20:19:01.801Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1301,"took":"1.041391ms","hash":303700750}
	{"level":"info","ts":"2023-07-17T20:19:01.801Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":303700750,"revision":1301,"compact-revision":1058}
	
	* 
	* ==> kernel <==
	*  20:19:35 up 21 min,  0 users,  load average: 0.18, 0.18, 0.18
	Linux default-k8s-diff-port-711413 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a] <==
	* I0717 20:15:05.060054       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 20:16:03.767498       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.100.30.24:443: connect: connection refused
	I0717 20:16:03.767566       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0717 20:17:03.766796       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.100.30.24:443: connect: connection refused
	I0717 20:17:03.766927       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0717 20:17:05.059356       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 20:17:05.059508       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 20:17:05.059521       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 20:17:05.060859       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 20:17:05.060954       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 20:17:05.060962       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 20:18:03.767244       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.100.30.24:443: connect: connection refused
	I0717 20:18:03.767338       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0717 20:19:03.766923       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.100.30.24:443: connect: connection refused
	I0717 20:19:03.767089       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0717 20:19:04.064955       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.100.30.24:443: connect: connection refused
	I0717 20:19:04.065064       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0717 20:19:05.064181       1 handler_proxy.go:100] no RequestInfo found in the context
	W0717 20:19:05.064180       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 20:19:05.064511       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 20:19:05.064529       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0717 20:19:05.064563       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 20:19:05.065949       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6] <==
	* W0717 20:13:17.607218       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:13:47.086512       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:13:47.616950       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:14:17.098026       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:14:17.628158       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:14:47.104317       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:14:47.645634       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:15:17.111847       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:15:17.664809       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:15:47.118829       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:15:47.675156       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:16:17.127300       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:16:17.684652       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:16:47.133638       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:16:47.693756       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:17:17.139899       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:17:17.708346       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:17:47.147001       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:17:47.718712       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:18:17.156744       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:18:17.730146       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:18:47.164457       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:18:47.740920       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:19:17.170589       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:19:17.750857       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	
	* 
	* ==> kube-proxy [76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951] <==
	* I0717 19:59:08.210790       1 node.go:141] Successfully retrieved node IP: 192.168.72.51
	I0717 19:59:08.211295       1 server_others.go:110] "Detected node IP" address="192.168.72.51"
	I0717 19:59:08.211519       1 server_others.go:554] "Using iptables proxy"
	I0717 19:59:08.355356       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0717 19:59:08.355524       1 server_others.go:192] "Using iptables Proxier"
	I0717 19:59:08.355604       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 19:59:08.356957       1 server.go:658] "Version info" version="v1.27.3"
	I0717 19:59:08.357111       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 19:59:08.358242       1 config.go:188] "Starting service config controller"
	I0717 19:59:08.358642       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 19:59:08.359245       1 config.go:97] "Starting endpoint slice config controller"
	I0717 19:59:08.397649       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 19:59:08.397701       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0717 19:59:08.360028       1 config.go:315] "Starting node config controller"
	I0717 19:59:08.397746       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 19:59:08.398001       1 shared_informer.go:318] Caches are synced for node config
	I0717 19:59:08.496231       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261] <==
	* W0717 19:59:04.027922       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 19:59:04.027937       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 19:59:04.028112       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 19:59:04.028131       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 19:59:04.033778       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 19:59:04.033851       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 19:59:04.038049       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 19:59:04.038120       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 19:59:04.038217       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 19:59:04.038231       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 19:59:04.038270       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 19:59:04.038279       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 19:59:04.038333       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 19:59:04.038342       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 19:59:04.038386       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 19:59:04.038477       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 19:59:04.038491       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 19:59:04.038500       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 19:59:04.038508       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 19:59:04.038519       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 19:59:04.038673       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 19:59:04.038686       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 19:59:04.046828       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 19:59:04.046917       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0717 19:59:05.202572       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-07-17 19:58:30 UTC, ends at Mon 2023-07-17 20:19:35 UTC. --
	Jul 17 20:16:55 default-k8s-diff-port-711413 kubelet[917]: E0717 20:16:55.816860     917 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hzcd7" podUID=17e01399-9910-4f01-abe7-3eae271af1db
	Jul 17 20:16:56 default-k8s-diff-port-711413 kubelet[917]: E0717 20:16:56.833332     917 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 17 20:16:56 default-k8s-diff-port-711413 kubelet[917]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 20:16:56 default-k8s-diff-port-711413 kubelet[917]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 20:16:56 default-k8s-diff-port-711413 kubelet[917]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 20:17:08 default-k8s-diff-port-711413 kubelet[917]: E0717 20:17:08.819772     917 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hzcd7" podUID=17e01399-9910-4f01-abe7-3eae271af1db
	Jul 17 20:17:20 default-k8s-diff-port-711413 kubelet[917]: E0717 20:17:20.816590     917 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hzcd7" podUID=17e01399-9910-4f01-abe7-3eae271af1db
	Jul 17 20:17:35 default-k8s-diff-port-711413 kubelet[917]: E0717 20:17:35.816814     917 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hzcd7" podUID=17e01399-9910-4f01-abe7-3eae271af1db
	Jul 17 20:17:50 default-k8s-diff-port-711413 kubelet[917]: E0717 20:17:50.817894     917 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hzcd7" podUID=17e01399-9910-4f01-abe7-3eae271af1db
	Jul 17 20:17:56 default-k8s-diff-port-711413 kubelet[917]: E0717 20:17:56.837034     917 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 17 20:17:56 default-k8s-diff-port-711413 kubelet[917]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 20:17:56 default-k8s-diff-port-711413 kubelet[917]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 20:17:56 default-k8s-diff-port-711413 kubelet[917]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 20:18:03 default-k8s-diff-port-711413 kubelet[917]: E0717 20:18:03.818045     917 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hzcd7" podUID=17e01399-9910-4f01-abe7-3eae271af1db
	Jul 17 20:18:16 default-k8s-diff-port-711413 kubelet[917]: E0717 20:18:16.817627     917 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hzcd7" podUID=17e01399-9910-4f01-abe7-3eae271af1db
	Jul 17 20:18:30 default-k8s-diff-port-711413 kubelet[917]: E0717 20:18:30.818095     917 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hzcd7" podUID=17e01399-9910-4f01-abe7-3eae271af1db
	Jul 17 20:18:43 default-k8s-diff-port-711413 kubelet[917]: E0717 20:18:43.817317     917 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hzcd7" podUID=17e01399-9910-4f01-abe7-3eae271af1db
	Jul 17 20:18:56 default-k8s-diff-port-711413 kubelet[917]: E0717 20:18:56.800169     917 container_manager_linux.go:515] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Jul 17 20:18:56 default-k8s-diff-port-711413 kubelet[917]: E0717 20:18:56.836327     917 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 17 20:18:56 default-k8s-diff-port-711413 kubelet[917]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 20:18:56 default-k8s-diff-port-711413 kubelet[917]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 20:18:56 default-k8s-diff-port-711413 kubelet[917]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 20:18:58 default-k8s-diff-port-711413 kubelet[917]: E0717 20:18:58.817247     917 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hzcd7" podUID=17e01399-9910-4f01-abe7-3eae271af1db
	Jul 17 20:19:09 default-k8s-diff-port-711413 kubelet[917]: E0717 20:19:09.818640     917 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hzcd7" podUID=17e01399-9910-4f01-abe7-3eae271af1db
	Jul 17 20:19:24 default-k8s-diff-port-711413 kubelet[917]: E0717 20:19:24.818208     917 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-hzcd7" podUID=17e01399-9910-4f01-abe7-3eae271af1db
	
	* 
	* ==> storage-provisioner [19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064] <==
	* I0717 19:59:38.240874       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 19:59:38.253804       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 19:59:38.253902       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 19:59:55.678479       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 19:59:55.678863       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-711413_fadc3d2d-e6d2-4a65-a4d5-0c0e40183736!
	I0717 19:59:55.679128       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"16b5bc07-9934-4f7c-b344-8a0ca0c9f59e", APIVersion:"v1", ResourceVersion:"601", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-711413_fadc3d2d-e6d2-4a65-a4d5-0c0e40183736 became leader
	I0717 19:59:55.780258       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-711413_fadc3d2d-e6d2-4a65-a4d5-0c0e40183736!
	
	* 
	* ==> storage-provisioner [4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88] <==
	* I0717 19:59:07.997756       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0717 19:59:38.000516       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-711413 -n default-k8s-diff-port-711413
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-711413 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5c6b9c-hzcd7
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-711413 describe pod metrics-server-74d5c6b9c-hzcd7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-711413 describe pod metrics-server-74d5c6b9c-hzcd7: exit status 1 (77.765418ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5c6b9c-hzcd7" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-711413 describe pod metrics-server-74d5c6b9c-hzcd7: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (423.05s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (143.86s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0717 20:16:00.134029 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt: no such file or directory
E0717 20:16:03.519740 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-149000 -n old-k8s-version-149000
start_stop_delete_test.go:287: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-07-17 20:17:49.774752711 +0000 UTC m=+5671.083448262
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-149000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-149000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.061µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-149000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-149000 -n old-k8s-version-149000
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-149000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-149000 logs -n 25: (1.797524717s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p no-preload-408472             | no-preload-408472            | jenkins | v1.30.1 | 17 Jul 23 19:50 UTC | 17 Jul 23 19:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-408472                                   | no-preload-408472            | jenkins | v1.30.1 | 17 Jul 23 19:50 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-711413  | default-k8s-diff-port-711413 | jenkins | v1.30.1 | 17 Jul 23 19:51 UTC | 17 Jul 23 19:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-711413 | jenkins | v1.30.1 | 17 Jul 23 19:51 UTC |                     |
	|         | default-k8s-diff-port-711413                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-891260             | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:51 UTC | 17 Jul 23 19:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-891260                                   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:51 UTC | 17 Jul 23 19:51 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-891260                  | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:51 UTC | 17 Jul 23 19:51 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-891260 --memory=2200 --alsologtostderr   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:51 UTC | 17 Jul 23 19:52 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-891260 sudo                              | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-891260                                   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-891260                                   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-891260                                   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	| delete  | -p newest-cni-891260                                   | newest-cni-891260            | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	| delete  | -p                                                     | disable-driver-mounts-178387 | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:52 UTC |
	|         | disable-driver-mounts-178387                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-114855                                  | embed-certs-114855           | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC | 17 Jul 23 19:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-149000             | old-k8s-version-149000       | jenkins | v1.30.1 | 17 Jul 23 19:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-149000                              | old-k8s-version-149000       | jenkins | v1.30.1 | 17 Jul 23 19:53 UTC | 17 Jul 23 20:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-408472                  | no-preload-408472            | jenkins | v1.30.1 | 17 Jul 23 19:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-408472                                   | no-preload-408472            | jenkins | v1.30.1 | 17 Jul 23 19:53 UTC | 17 Jul 23 20:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-711413       | default-k8s-diff-port-711413 | jenkins | v1.30.1 | 17 Jul 23 19:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-711413 | jenkins | v1.30.1 | 17 Jul 23 19:54 UTC | 17 Jul 23 20:03 UTC |
	|         | default-k8s-diff-port-711413                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-114855            | embed-certs-114855           | jenkins | v1.30.1 | 17 Jul 23 19:54 UTC | 17 Jul 23 19:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-114855                                  | embed-certs-114855           | jenkins | v1.30.1 | 17 Jul 23 19:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-114855                 | embed-certs-114855           | jenkins | v1.30.1 | 17 Jul 23 19:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-114855                                  | embed-certs-114855           | jenkins | v1.30.1 | 17 Jul 23 19:57 UTC | 17 Jul 23 20:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 19:57:15
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 19:57:15.731358 1103141 out.go:296] Setting OutFile to fd 1 ...
	I0717 19:57:15.731568 1103141 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:57:15.731580 1103141 out.go:309] Setting ErrFile to fd 2...
	I0717 19:57:15.731587 1103141 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:57:15.731815 1103141 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1061725/.minikube/bin
	I0717 19:57:15.732432 1103141 out.go:303] Setting JSON to false
	I0717 19:57:15.733539 1103141 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":16787,"bootTime":1689607049,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 19:57:15.733642 1103141 start.go:138] virtualization: kvm guest
	I0717 19:57:15.737317 1103141 out.go:177] * [embed-certs-114855] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 19:57:15.739399 1103141 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 19:57:15.739429 1103141 notify.go:220] Checking for updates...
	I0717 19:57:15.741380 1103141 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:57:15.743518 1103141 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 19:57:15.745436 1103141 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1061725/.minikube
	I0717 19:57:15.747588 1103141 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 19:57:15.749399 1103141 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 19:57:15.751806 1103141 config.go:182] Loaded profile config "embed-certs-114855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:57:15.752284 1103141 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:57:15.752344 1103141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:57:15.767989 1103141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42921
	I0717 19:57:15.768411 1103141 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:57:15.769006 1103141 main.go:141] libmachine: Using API Version  1
	I0717 19:57:15.769098 1103141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:57:15.769495 1103141 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:57:15.769753 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 19:57:15.770054 1103141 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 19:57:15.770369 1103141 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:57:15.770414 1103141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:57:15.785632 1103141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40597
	I0717 19:57:15.786193 1103141 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:57:15.786746 1103141 main.go:141] libmachine: Using API Version  1
	I0717 19:57:15.786780 1103141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:57:15.787144 1103141 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:57:15.787366 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 19:57:15.827764 1103141 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 19:57:15.829847 1103141 start.go:298] selected driver: kvm2
	I0717 19:57:15.829881 1103141 start.go:880] validating driver "kvm2" against &{Name:embed-certs-114855 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-11
4855 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountStrin
g:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:57:15.830064 1103141 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 19:57:15.830818 1103141 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:57:15.830919 1103141 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16890-1061725/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 19:57:15.846540 1103141 install.go:137] /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2 version is 1.30.1
	I0717 19:57:15.846983 1103141 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 19:57:15.847033 1103141 cni.go:84] Creating CNI manager for ""
	I0717 19:57:15.847067 1103141 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:57:15.847081 1103141 start_flags.go:319] config:
	{Name:embed-certs-114855 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-114855 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs
:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:57:15.847306 1103141 iso.go:125] acquiring lock: {Name:mk4c08fdde891ec9d41b144ca36ec57b2da32175 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:57:15.849943 1103141 out.go:177] * Starting control plane node embed-certs-114855 in cluster embed-certs-114855
	I0717 19:57:14.309967 1101908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.177:22: connect: no route to host
	I0717 19:57:15.851794 1103141 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 19:57:15.851858 1103141 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0717 19:57:15.851874 1103141 cache.go:57] Caching tarball of preloaded images
	I0717 19:57:15.851987 1103141 preload.go:174] Found /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 19:57:15.852001 1103141 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 19:57:15.852143 1103141 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855/config.json ...
	I0717 19:57:15.852383 1103141 start.go:365] acquiring machines lock for embed-certs-114855: {Name:mk1fbc5d19c9a63796b02883911e39f1c406ff89 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:57:17.381986 1101908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.177:22: connect: no route to host
	I0717 19:57:23.461901 1101908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.177:22: connect: no route to host
	I0717 19:57:26.533953 1101908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.177:22: connect: no route to host
	I0717 19:57:32.613932 1101908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.177:22: connect: no route to host
	I0717 19:57:35.685977 1101908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.177:22: connect: no route to host
	I0717 19:57:41.765852 1101908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.177:22: connect: no route to host
	I0717 19:57:44.837869 1101908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.177:22: connect: no route to host
	I0717 19:57:50.917965 1101908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.177:22: connect: no route to host
	I0717 19:57:53.921775 1102136 start.go:369] acquired machines lock for "no-preload-408472" in 4m25.126407357s
	I0717 19:57:53.921838 1102136 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:57:53.921845 1102136 fix.go:54] fixHost starting: 
	I0717 19:57:53.922267 1102136 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:57:53.922309 1102136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:57:53.937619 1102136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39509
	I0717 19:57:53.938191 1102136 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:57:53.938815 1102136 main.go:141] libmachine: Using API Version  1
	I0717 19:57:53.938854 1102136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:57:53.939222 1102136 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:57:53.939501 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	I0717 19:57:53.939704 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetState
	I0717 19:57:53.941674 1102136 fix.go:102] recreateIfNeeded on no-preload-408472: state=Stopped err=<nil>
	I0717 19:57:53.941732 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	W0717 19:57:53.941961 1102136 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 19:57:53.944840 1102136 out.go:177] * Restarting existing kvm2 VM for "no-preload-408472" ...
	I0717 19:57:53.919175 1101908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:57:53.919232 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 19:57:53.921597 1101908 machine.go:91] provisioned docker machine in 4m37.562634254s
	I0717 19:57:53.921653 1101908 fix.go:56] fixHost completed within 4m37.5908464s
	I0717 19:57:53.921659 1101908 start.go:83] releasing machines lock for "old-k8s-version-149000", held for 4m37.590895645s
	W0717 19:57:53.921680 1101908 start.go:688] error starting host: provision: host is not running
	W0717 19:57:53.921815 1101908 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0717 19:57:53.921826 1101908 start.go:703] Will try again in 5 seconds ...
	I0717 19:57:53.947202 1102136 main.go:141] libmachine: (no-preload-408472) Calling .Start
	I0717 19:57:53.947561 1102136 main.go:141] libmachine: (no-preload-408472) Ensuring networks are active...
	I0717 19:57:53.948787 1102136 main.go:141] libmachine: (no-preload-408472) Ensuring network default is active
	I0717 19:57:53.949254 1102136 main.go:141] libmachine: (no-preload-408472) Ensuring network mk-no-preload-408472 is active
	I0717 19:57:53.949695 1102136 main.go:141] libmachine: (no-preload-408472) Getting domain xml...
	I0717 19:57:53.950763 1102136 main.go:141] libmachine: (no-preload-408472) Creating domain...
	I0717 19:57:55.256278 1102136 main.go:141] libmachine: (no-preload-408472) Waiting to get IP...
	I0717 19:57:55.257164 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:57:55.257506 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:57:55.257619 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:57:55.257495 1103281 retry.go:31] will retry after 210.861865ms: waiting for machine to come up
	I0717 19:57:55.470210 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:57:55.470771 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:57:55.470798 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:57:55.470699 1103281 retry.go:31] will retry after 348.064579ms: waiting for machine to come up
	I0717 19:57:55.820645 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:57:55.821335 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:57:55.821366 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:57:55.821251 1103281 retry.go:31] will retry after 340.460253ms: waiting for machine to come up
	I0717 19:57:56.163913 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:57:56.164380 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:57:56.164412 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:57:56.164331 1103281 retry.go:31] will retry after 551.813243ms: waiting for machine to come up
	I0717 19:57:56.718505 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:57:56.719004 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:57:56.719034 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:57:56.718953 1103281 retry.go:31] will retry after 640.277548ms: waiting for machine to come up
	I0717 19:57:57.360930 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:57:57.361456 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:57:57.361485 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:57:57.361395 1103281 retry.go:31] will retry after 590.296988ms: waiting for machine to come up
	I0717 19:57:57.953399 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:57:57.953886 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:57:57.953913 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:57:57.953811 1103281 retry.go:31] will retry after 884.386688ms: waiting for machine to come up
	I0717 19:57:58.923546 1101908 start.go:365] acquiring machines lock for old-k8s-version-149000: {Name:mk1fbc5d19c9a63796b02883911e39f1c406ff89 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:57:58.840158 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:57:58.840577 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:57:58.840610 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:57:58.840529 1103281 retry.go:31] will retry after 1.10470212s: waiting for machine to come up
	I0717 19:57:59.947457 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:57:59.947973 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:57:59.948001 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:57:59.947933 1103281 retry.go:31] will retry after 1.338375271s: waiting for machine to come up
	I0717 19:58:01.288616 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:01.289194 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:58:01.289226 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:58:01.289133 1103281 retry.go:31] will retry after 1.633127486s: waiting for machine to come up
	I0717 19:58:02.923621 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:02.924330 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:58:02.924365 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:58:02.924253 1103281 retry.go:31] will retry after 2.365924601s: waiting for machine to come up
	I0717 19:58:05.291979 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:05.292487 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:58:05.292519 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:58:05.292430 1103281 retry.go:31] will retry after 2.846623941s: waiting for machine to come up
	I0717 19:58:08.142536 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:08.143021 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:58:08.143050 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:58:08.142961 1103281 retry.go:31] will retry after 3.495052949s: waiting for machine to come up
	I0717 19:58:11.641858 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:11.642358 1102136 main.go:141] libmachine: (no-preload-408472) DBG | unable to find current IP address of domain no-preload-408472 in network mk-no-preload-408472
	I0717 19:58:11.642384 1102136 main.go:141] libmachine: (no-preload-408472) DBG | I0717 19:58:11.642302 1103281 retry.go:31] will retry after 5.256158303s: waiting for machine to come up
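	The retry.go lines above show the driver polling the libvirt DHCP leases with a growing delay until the restarted domain reports an IP. A minimal Go sketch of that wait-with-backoff pattern, assuming a hypothetical lookupIP helper (not minikube's actual code) that fails until the lease appears:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for querying the libvirt DHCP leases; it is a
	// hypothetical helper, not part of minikube's API.
	func lookupIP(domain string) (string, error) {
		return "", errors.New("unable to find current IP address of domain " + domain)
	}

	// waitForIP polls until an IP is found or the deadline expires, sleeping a
	// little longer (with jitter) after each failed attempt, mirroring the
	// "will retry after ..." messages in the log.
	func waitForIP(domain string, deadline time.Duration) (string, error) {
		delay := 200 * time.Millisecond
		end := time.Now().Add(deadline)
		for time.Now().Before(end) {
			if ip, err := lookupIP(domain); err == nil {
				return ip, nil
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			delay *= 2 // back off before the next probe
		}
		return "", fmt.Errorf("timed out waiting for %s to get an IP", domain)
	}

	func main() {
		if _, err := waitForIP("no-preload-408472", 2*time.Second); err != nil {
			fmt.Println(err)
		}
	}
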
	I0717 19:58:18.263277 1102415 start.go:369] acquired machines lock for "default-k8s-diff-port-711413" in 4m14.158154198s
	I0717 19:58:18.263342 1102415 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:58:18.263362 1102415 fix.go:54] fixHost starting: 
	I0717 19:58:18.263897 1102415 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:58:18.263950 1102415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:58:18.280719 1102415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33059
	I0717 19:58:18.281241 1102415 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:58:18.281819 1102415 main.go:141] libmachine: Using API Version  1
	I0717 19:58:18.281845 1102415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:58:18.282261 1102415 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:58:18.282489 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	I0717 19:58:18.282657 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetState
	I0717 19:58:18.284625 1102415 fix.go:102] recreateIfNeeded on default-k8s-diff-port-711413: state=Stopped err=<nil>
	I0717 19:58:18.284655 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	W0717 19:58:18.284839 1102415 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 19:58:18.288135 1102415 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-711413" ...
	I0717 19:58:16.902597 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:16.903197 1102136 main.go:141] libmachine: (no-preload-408472) Found IP for machine: 192.168.61.65
	I0717 19:58:16.903226 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has current primary IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:16.903232 1102136 main.go:141] libmachine: (no-preload-408472) Reserving static IP address...
	I0717 19:58:16.903758 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "no-preload-408472", mac: "52:54:00:36:75:ac", ip: "192.168.61.65"} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:16.903794 1102136 main.go:141] libmachine: (no-preload-408472) Reserved static IP address: 192.168.61.65
	I0717 19:58:16.903806 1102136 main.go:141] libmachine: (no-preload-408472) DBG | skip adding static IP to network mk-no-preload-408472 - found existing host DHCP lease matching {name: "no-preload-408472", mac: "52:54:00:36:75:ac", ip: "192.168.61.65"}
	I0717 19:58:16.903820 1102136 main.go:141] libmachine: (no-preload-408472) DBG | Getting to WaitForSSH function...
	I0717 19:58:16.903830 1102136 main.go:141] libmachine: (no-preload-408472) Waiting for SSH to be available...
	I0717 19:58:16.906385 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:16.906796 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:16.906833 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:16.906966 1102136 main.go:141] libmachine: (no-preload-408472) DBG | Using SSH client type: external
	I0717 19:58:16.907000 1102136 main.go:141] libmachine: (no-preload-408472) DBG | Using SSH private key: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/no-preload-408472/id_rsa (-rw-------)
	I0717 19:58:16.907034 1102136 main.go:141] libmachine: (no-preload-408472) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/no-preload-408472/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:58:16.907056 1102136 main.go:141] libmachine: (no-preload-408472) DBG | About to run SSH command:
	I0717 19:58:16.907116 1102136 main.go:141] libmachine: (no-preload-408472) DBG | exit 0
	I0717 19:58:16.998306 1102136 main.go:141] libmachine: (no-preload-408472) DBG | SSH cmd err, output: <nil>: 
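	The "Using SSH client type: external" block above probes the guest by running `exit 0` through the system ssh binary with the key and options shown in the log. The following sketch rebuilds that probe with a subset of those options; it is an illustration only, not minikube's WaitForSSH implementation:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// sshReady runs a trivial command over ssh and reports whether the guest
	// accepted the connection, using the user, host, and key path from the log.
	func sshReady(user, host, keyPath string) error {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			fmt.Sprintf("%s@%s", user, host),
			"exit 0",
		}
		out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("ssh not ready: %v (output: %s)", err, out)
		}
		return nil
	}

	func main() {
		key := "/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/no-preload-408472/id_rsa"
		if err := sshReady("docker", "192.168.61.65", key); err != nil {
			fmt.Println(err)
		}
	}
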
	I0717 19:58:16.998744 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetConfigRaw
	I0717 19:58:16.999490 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetIP
	I0717 19:58:17.002697 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.003108 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:17.003156 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.003405 1102136 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/config.json ...
	I0717 19:58:17.003642 1102136 machine.go:88] provisioning docker machine ...
	I0717 19:58:17.003668 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	I0717 19:58:17.003989 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetMachineName
	I0717 19:58:17.004208 1102136 buildroot.go:166] provisioning hostname "no-preload-408472"
	I0717 19:58:17.004234 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetMachineName
	I0717 19:58:17.004464 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:58:17.007043 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.007337 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:17.007371 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.007517 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:58:17.007730 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:17.007933 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:17.008071 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:58:17.008245 1102136 main.go:141] libmachine: Using SSH client type: native
	I0717 19:58:17.008906 1102136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.65 22 <nil> <nil>}
	I0717 19:58:17.008927 1102136 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-408472 && echo "no-preload-408472" | sudo tee /etc/hostname
	I0717 19:58:17.143779 1102136 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-408472
	
	I0717 19:58:17.143816 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:58:17.146881 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.147332 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:17.147384 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.147556 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:58:17.147807 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:17.147990 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:17.148137 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:58:17.148320 1102136 main.go:141] libmachine: Using SSH client type: native
	I0717 19:58:17.148789 1102136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.65 22 <nil> <nil>}
	I0717 19:58:17.148811 1102136 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-408472' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-408472/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-408472' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:58:17.279254 1102136 main.go:141] libmachine: SSH cmd err, output: <nil>: 
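	The two SSH commands above set the transient and persistent hostname, then make sure the 127.0.1.1 entry in /etc/hosts matches it. A small Go sketch that assembles those same shell snippets for a given machine name; it only illustrates what the provisioner sends over SSH, not the code minikube itself uses:

	package main

	import "fmt"

	// buildHostnameCmds reproduces the two shell snippets from the log.
	func buildHostnameCmds(name string) []string {
		setHostname := fmt.Sprintf(
			`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname`, name)
		fixHosts := fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, name)
		return []string{setHostname, fixHosts}
	}

	func main() {
		for _, cmd := range buildHostnameCmds("no-preload-408472") {
			fmt.Println(cmd)
		}
	}
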
	I0717 19:58:17.279292 1102136 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16890-1061725/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-1061725/.minikube}
	I0717 19:58:17.279339 1102136 buildroot.go:174] setting up certificates
	I0717 19:58:17.279375 1102136 provision.go:83] configureAuth start
	I0717 19:58:17.279390 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetMachineName
	I0717 19:58:17.279745 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetIP
	I0717 19:58:17.283125 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.283563 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:17.283610 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.283837 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:58:17.286508 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.286931 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:17.286975 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.287088 1102136 provision.go:138] copyHostCerts
	I0717 19:58:17.287196 1102136 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem, removing ...
	I0717 19:58:17.287210 1102136 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem
	I0717 19:58:17.287299 1102136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem (1082 bytes)
	I0717 19:58:17.287430 1102136 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem, removing ...
	I0717 19:58:17.287443 1102136 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem
	I0717 19:58:17.287486 1102136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem (1123 bytes)
	I0717 19:58:17.287634 1102136 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem, removing ...
	I0717 19:58:17.287650 1102136 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem
	I0717 19:58:17.287691 1102136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem (1675 bytes)
	I0717 19:58:17.287762 1102136 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem org=jenkins.no-preload-408472 san=[192.168.61.65 192.168.61.65 localhost 127.0.0.1 minikube no-preload-408472]
	I0717 19:58:17.492065 1102136 provision.go:172] copyRemoteCerts
	I0717 19:58:17.492172 1102136 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:58:17.492209 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:58:17.495444 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.495931 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:17.495971 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.496153 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:58:17.496406 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:17.496605 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:58:17.496793 1102136 sshutil.go:53] new ssh client: &{IP:192.168.61.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/no-preload-408472/id_rsa Username:docker}
	I0717 19:58:17.588540 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 19:58:17.613378 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0717 19:58:17.638066 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:58:17.662222 1102136 provision.go:86] duration metric: configureAuth took 382.813668ms
	I0717 19:58:17.662267 1102136 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:58:17.662522 1102136 config.go:182] Loaded profile config "no-preload-408472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:58:17.662613 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:58:17.665914 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.666415 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:17.666475 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:17.666673 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:58:17.666934 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:17.667122 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:17.667287 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:58:17.667466 1102136 main.go:141] libmachine: Using SSH client type: native
	I0717 19:58:17.667885 1102136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.65 22 <nil> <nil>}
	I0717 19:58:17.667903 1102136 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:58:17.997416 1102136 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:58:17.997461 1102136 machine.go:91] provisioned docker machine in 993.802909ms
	I0717 19:58:17.997476 1102136 start.go:300] post-start starting for "no-preload-408472" (driver="kvm2")
	I0717 19:58:17.997490 1102136 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:58:17.997533 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	I0717 19:58:17.997925 1102136 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:58:17.998013 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:58:18.000755 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:18.001185 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:18.001210 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:18.001409 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:58:18.001682 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:18.001892 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:58:18.002059 1102136 sshutil.go:53] new ssh client: &{IP:192.168.61.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/no-preload-408472/id_rsa Username:docker}
	I0717 19:58:18.093738 1102136 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:58:18.098709 1102136 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 19:58:18.098744 1102136 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/addons for local assets ...
	I0717 19:58:18.098854 1102136 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/files for local assets ...
	I0717 19:58:18.098974 1102136 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> 10689542.pem in /etc/ssl/certs
	I0717 19:58:18.099098 1102136 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:58:18.110195 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:58:18.135572 1102136 start.go:303] post-start completed in 138.074603ms
	I0717 19:58:18.135628 1102136 fix.go:56] fixHost completed within 24.21376423s
	I0717 19:58:18.135652 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:58:18.139033 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:18.139617 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:18.139656 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:18.139847 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:58:18.140146 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:18.140366 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:18.140612 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:58:18.140819 1102136 main.go:141] libmachine: Using SSH client type: native
	I0717 19:58:18.141265 1102136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.65 22 <nil> <nil>}
	I0717 19:58:18.141282 1102136 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:58:18.263053 1102136 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689623898.247474645
	
	I0717 19:58:18.263080 1102136 fix.go:206] guest clock: 1689623898.247474645
	I0717 19:58:18.263096 1102136 fix.go:219] Guest: 2023-07-17 19:58:18.247474645 +0000 UTC Remote: 2023-07-17 19:58:18.135632998 +0000 UTC m=+289.513196741 (delta=111.841647ms)
	I0717 19:58:18.263124 1102136 fix.go:190] guest clock delta is within tolerance: 111.841647ms
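	The guest-clock check above runs `date +%s.%N` on the VM, compares it with the host's view of "now", and accepts the machine when the drift is small (here 111.841647ms). A short Go sketch of that comparison, using the exact timestamps from the log; the 2s tolerance is an assumption for illustration, not a value taken from minikube's source:

	package main

	import (
		"fmt"
		"time"
	)

	// clockDelta converts the guest's seconds.nanoseconds reading into a
	// time.Time, takes the absolute difference from the host clock, and
	// reports whether it falls inside the tolerance.
	func clockDelta(guestSec, guestNsec int64, hostNow time.Time, tolerance time.Duration) (time.Duration, bool) {
		guest := time.Unix(guestSec, guestNsec)
		delta := guest.Sub(hostNow)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		host := time.Date(2023, 7, 17, 19, 58, 18, 135632998, time.UTC)
		delta, ok := clockDelta(1689623898, 247474645, host, 2*time.Second)
		fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, ok)
	}
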
	I0717 19:58:18.263132 1102136 start.go:83] releasing machines lock for "no-preload-408472", held for 24.341313825s
	I0717 19:58:18.263184 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	I0717 19:58:18.263451 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetIP
	I0717 19:58:18.266352 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:18.266707 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:18.266732 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:18.266920 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	I0717 19:58:18.267684 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	I0717 19:58:18.267935 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	I0717 19:58:18.268033 1102136 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:58:18.268095 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:58:18.268205 1102136 ssh_runner.go:195] Run: cat /version.json
	I0717 19:58:18.268249 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:58:18.270983 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:18.271223 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:18.271324 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:18.271385 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:18.271494 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:58:18.271608 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:18.271628 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:18.271697 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:18.271879 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:58:18.271895 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:58:18.272094 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:58:18.272099 1102136 sshutil.go:53] new ssh client: &{IP:192.168.61.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/no-preload-408472/id_rsa Username:docker}
	I0717 19:58:18.272253 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:58:18.272419 1102136 sshutil.go:53] new ssh client: &{IP:192.168.61.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/no-preload-408472/id_rsa Username:docker}
	W0717 19:58:18.395775 1102136 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 19:58:18.395916 1102136 ssh_runner.go:195] Run: systemctl --version
	I0717 19:58:18.403799 1102136 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:58:18.557449 1102136 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:58:18.564470 1102136 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:58:18.564580 1102136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:58:18.580344 1102136 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:58:18.580386 1102136 start.go:469] detecting cgroup driver to use...
	I0717 19:58:18.580482 1102136 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:58:18.595052 1102136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:58:18.608844 1102136 docker.go:196] disabling cri-docker service (if available) ...
	I0717 19:58:18.608948 1102136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:58:18.621908 1102136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:58:18.635796 1102136 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:58:18.290375 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .Start
	I0717 19:58:18.290615 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Ensuring networks are active...
	I0717 19:58:18.291470 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Ensuring network default is active
	I0717 19:58:18.292041 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Ensuring network mk-default-k8s-diff-port-711413 is active
	I0717 19:58:18.292477 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Getting domain xml...
	I0717 19:58:18.293393 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Creating domain...
	I0717 19:58:18.751368 1102136 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:58:18.878097 1102136 docker.go:212] disabling docker service ...
	I0717 19:58:18.878186 1102136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:58:18.895094 1102136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:58:18.909958 1102136 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:58:19.032014 1102136 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:58:19.141917 1102136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:58:19.158474 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:58:19.178688 1102136 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:58:19.178767 1102136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:58:19.189949 1102136 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:58:19.190059 1102136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:58:19.201270 1102136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:58:19.212458 1102136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:58:19.226193 1102136 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:58:19.239919 1102136 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:58:19.251627 1102136 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:58:19.251711 1102136 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:58:19.268984 1102136 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:58:19.281898 1102136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:58:19.390523 1102136 ssh_runner.go:195] Run: sudo systemctl restart crio
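	The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs driver, conmon cgroup), clears the stale CNI config, enables IP forwarding, and restarts CRI-O. The sketch below just collects those remote commands for a given pause image and cgroup driver; it replaces minikube's ssh_runner with a printout and is purely illustrative:

	package main

	import "fmt"

	// crioConfigCmds lists the remote commands the log shows for pointing
	// CRI-O at the requested pause image and cgroup driver, then restarting it.
	func crioConfigCmds(pauseImage, cgroupDriver string) []string {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		return []string{
			fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
			fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupDriver, conf),
			fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
			fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
			"sudo rm -rf /etc/cni/net.mk",
			`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
			"sudo systemctl daemon-reload",
			"sudo systemctl restart crio",
		}
	}

	func main() {
		for _, cmd := range crioConfigCmds("registry.k8s.io/pause:3.9", "cgroupfs") {
			fmt.Println(cmd)
		}
	}
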
	I0717 19:58:19.599827 1102136 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:58:19.599937 1102136 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:58:19.605741 1102136 start.go:537] Will wait 60s for crictl version
	I0717 19:58:19.605810 1102136 ssh_runner.go:195] Run: which crictl
	I0717 19:58:19.610347 1102136 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:58:19.653305 1102136 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 19:58:19.653418 1102136 ssh_runner.go:195] Run: crio --version
	I0717 19:58:19.712418 1102136 ssh_runner.go:195] Run: crio --version
	I0717 19:58:19.773012 1102136 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0717 19:58:19.775099 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetIP
	I0717 19:58:19.778530 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:19.779127 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:58:19.779167 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:58:19.779477 1102136 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0717 19:58:19.784321 1102136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:58:19.797554 1102136 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 19:58:19.797682 1102136 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:58:19.833548 1102136 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0717 19:58:19.833590 1102136 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.27.3 registry.k8s.io/kube-controller-manager:v1.27.3 registry.k8s.io/kube-scheduler:v1.27.3 registry.k8s.io/kube-proxy:v1.27.3 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.7-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 19:58:19.833722 1102136 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:58:19.833749 1102136 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0717 19:58:19.833760 1102136 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.7-0
	I0717 19:58:19.833787 1102136 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0717 19:58:19.833821 1102136 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.27.3
	I0717 19:58:19.833722 1102136 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.27.3
	I0717 19:58:19.833722 1102136 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 19:58:19.833722 1102136 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.27.3
	I0717 19:58:19.835432 1102136 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 19:58:19.835461 1102136 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.7-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.7-0
	I0717 19:58:19.835497 1102136 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:58:19.835492 1102136 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0717 19:58:19.835432 1102136 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0717 19:58:19.835432 1102136 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.27.3
	I0717 19:58:19.835463 1102136 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.27.3
	I0717 19:58:19.835436 1102136 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.27.3
	I0717 19:58:20.032458 1102136 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.27.3
	I0717 19:58:20.032526 1102136 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I0717 19:58:20.035507 1102136 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.7-0
	I0717 19:58:20.035509 1102136 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.27.3
	I0717 19:58:20.041878 1102136 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 19:58:20.056915 1102136 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0717 19:58:20.099112 1102136 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.27.3
	I0717 19:58:20.119661 1102136 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:58:20.195250 1102136 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.27.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.27.3" does not exist at hash "08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a" in container runtime
	I0717 19:58:20.195338 1102136 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0717 19:58:20.195384 1102136 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0717 19:58:20.195441 1102136 ssh_runner.go:195] Run: which crictl
	I0717 19:58:20.195348 1102136 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.27.3
	I0717 19:58:20.195521 1102136 ssh_runner.go:195] Run: which crictl
	I0717 19:58:20.212109 1102136 cache_images.go:116] "registry.k8s.io/etcd:3.5.7-0" needs transfer: "registry.k8s.io/etcd:3.5.7-0" does not exist at hash "86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681" in container runtime
	I0717 19:58:20.212185 1102136 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.7-0
	I0717 19:58:20.212255 1102136 ssh_runner.go:195] Run: which crictl
	I0717 19:58:20.232021 1102136 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.27.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.27.3" does not exist at hash "41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a" in container runtime
	I0717 19:58:20.232077 1102136 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.27.3
	I0717 19:58:20.232126 1102136 ssh_runner.go:195] Run: which crictl
	I0717 19:58:20.232224 1102136 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.27.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.27.3" does not exist at hash "7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f" in container runtime
	I0717 19:58:20.232257 1102136 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 19:58:20.232287 1102136 ssh_runner.go:195] Run: which crictl
	I0717 19:58:20.363363 1102136 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.27.3" needs transfer: "registry.k8s.io/kube-proxy:v1.27.3" does not exist at hash "5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c" in container runtime
	I0717 19:58:20.363425 1102136 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.27.3
	I0717 19:58:20.363470 1102136 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0717 19:58:20.363498 1102136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0717 19:58:20.363529 1102136 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:58:20.363483 1102136 ssh_runner.go:195] Run: which crictl
	I0717 19:58:20.363579 1102136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.27.3
	I0717 19:58:20.363660 1102136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.7-0
	I0717 19:58:20.363569 1102136 ssh_runner.go:195] Run: which crictl
	I0717 19:58:20.363722 1102136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.27.3
	I0717 19:58:20.363783 1102136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 19:58:20.368457 1102136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.27.3
	I0717 19:58:20.469461 1102136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.27.3
	I0717 19:58:20.469647 1102136 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.27.3
	I0717 19:58:20.476546 1102136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.7-0
	I0717 19:58:20.476613 1102136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:58:20.476657 1102136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.27.3
	I0717 19:58:20.476703 1102136 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.7-0
	I0717 19:58:20.476751 1102136 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.27.3
	I0717 19:58:20.476824 1102136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I0717 19:58:20.476918 1102136 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I0717 19:58:20.483915 1102136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.27.3
	I0717 19:58:20.483949 1102136 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.27.3 (exists)
	I0717 19:58:20.483966 1102136 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.27.3
	I0717 19:58:20.483970 1102136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.27.3
	I0717 19:58:20.484015 1102136 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.27.3
	I0717 19:58:20.484030 1102136 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.27.3
	I0717 19:58:20.484067 1102136 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.27.3
	I0717 19:58:20.532090 1102136 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I0717 19:58:20.532113 1102136 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.7-0 (exists)
	I0717 19:58:20.532194 1102136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 19:58:20.532213 1102136 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.27.3 (exists)
	I0717 19:58:20.532304 1102136 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
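	Because this is the no-preload profile, the images are not preloaded: each required image is inspected in the runtime, reported as "needs transfer" when missing, and then loaded from its cached tarball under /var/lib/minikube/images. A minimal Go sketch of that decision flow, assuming the tarball has already been copied to the guest; it is not minikube's real cache_images code:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// loadCachedImage loads the cached tarball with podman only when the image
	// is not already present in the runtime's storage.
	func loadCachedImage(image, tarball string) error {
		// "podman image inspect" exits non-zero when the image is missing,
		// which is what the "needs transfer" lines above report.
		if err := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", image).Run(); err == nil {
			fmt.Printf("%s already present, skipping load\n", image)
			return nil
		}
		out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
		if err != nil {
			return fmt.Errorf("podman load %s: %v (%s)", tarball, err, out)
		}
		fmt.Printf("loaded %s from %s\n", image, tarball)
		return nil
	}

	func main() {
		err := loadCachedImage("registry.k8s.io/kube-apiserver:v1.27.3",
			"/var/lib/minikube/images/kube-apiserver_v1.27.3")
		if err != nil {
			fmt.Println(err)
		}
	}
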
	I0717 19:58:19.668342 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting to get IP...
	I0717 19:58:19.669327 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:19.669868 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:19.669996 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:19.669860 1103407 retry.go:31] will retry after 270.908859ms: waiting for machine to come up
	I0717 19:58:19.942914 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:19.943490 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:19.943524 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:19.943434 1103407 retry.go:31] will retry after 387.572792ms: waiting for machine to come up
	I0717 19:58:20.333302 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:20.333904 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:20.333934 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:20.333842 1103407 retry.go:31] will retry after 325.807844ms: waiting for machine to come up
	I0717 19:58:20.661438 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:20.661890 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:20.661926 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:20.661828 1103407 retry.go:31] will retry after 492.482292ms: waiting for machine to come up
	I0717 19:58:21.155613 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:21.156184 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:21.156212 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:21.156089 1103407 retry.go:31] will retry after 756.388438ms: waiting for machine to come up
	I0717 19:58:21.914212 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:21.914770 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:21.914806 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:21.914695 1103407 retry.go:31] will retry after 754.504649ms: waiting for machine to come up
	I0717 19:58:22.670913 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:22.671334 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:22.671369 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:22.671278 1103407 retry.go:31] will retry after 790.272578ms: waiting for machine to come up
	I0717 19:58:23.463657 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:23.464118 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:23.464145 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:23.464042 1103407 retry.go:31] will retry after 1.267289365s: waiting for machine to come up
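The interleaved 1102415 lines above are minikube's retry loop polling libvirt for the VM's DHCP lease, backing off between attempts (roughly 200ms growing toward a few seconds). Below is a minimal Go sketch of that poll-with-backoff pattern, not minikube's retry.go; lookupLeaseIP is a hypothetical stand-in for the libvirt lease query.

// Minimal sketch of the "will retry after ..." pattern: poll for a VM's DHCP
// lease with a growing, jittered delay. lookupLeaseIP is a placeholder; a real
// implementation would ask libvirt for the lease matching the domain's MAC.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("unable to find current IP address")

func lookupLeaseIP(mac string) (string, error) {
	return "", errNoLease // placeholder: always "not yet"
}

func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		// Add jitter and grow the delay, roughly matching the log's
		// 200ms -> 300ms -> ... -> 3s progression.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 3*time.Second {
			delay = delay * 3 / 2
		}
	}
	return "", fmt.Errorf("timed out waiting for IP of %s", mac)
}

func main() {
	if ip, err := waitForIP("52:54:00:7d:d7:a9", 2*time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("got IP:", ip)
	}
}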
	I0717 19:58:23.707718 1102136 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.27.3: (3.223672376s)
	I0717 19:58:23.707750 1102136 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.27.3 from cache
	I0717 19:58:23.707788 1102136 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I0717 19:58:23.707804 1102136 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.27.3: (3.223748615s)
	I0717 19:58:23.707842 1102136 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.27.3 (exists)
	I0717 19:58:23.707856 1102136 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.27.3: (3.223769648s)
	I0717 19:58:23.707862 1102136 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I0717 19:58:23.707878 1102136 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.27.3 (exists)
	I0717 19:58:23.707908 1102136 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.175586566s)
	I0717 19:58:23.707926 1102136 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0717 19:58:24.960652 1102136 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.252755334s)
	I0717 19:58:24.960691 1102136 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I0717 19:58:24.960722 1102136 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.7-0
	I0717 19:58:24.960770 1102136 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.7-0
	I0717 19:58:24.733590 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:24.734140 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:24.734176 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:24.734049 1103407 retry.go:31] will retry after 1.733875279s: waiting for machine to come up
	I0717 19:58:26.470148 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:26.470587 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:26.470640 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:26.470522 1103407 retry.go:31] will retry after 1.829632979s: waiting for machine to come up
	I0717 19:58:28.301973 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:28.302506 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:28.302560 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:28.302421 1103407 retry.go:31] will retry after 2.201530837s: waiting for machine to come up
	I0717 19:58:32.118558 1102136 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.7-0: (7.157750323s)
	I0717 19:58:32.118606 1102136 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.7-0 from cache
	I0717 19:58:32.118641 1102136 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.27.3
	I0717 19:58:32.118700 1102136 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.27.3
	I0717 19:58:33.577369 1102136 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.27.3: (1.458638516s)
	I0717 19:58:33.577400 1102136 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.27.3 from cache
	I0717 19:58:33.577447 1102136 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.27.3
	I0717 19:58:33.577595 1102136 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.27.3
	I0717 19:58:30.507029 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:30.507586 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:30.507647 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:30.507447 1103407 retry.go:31] will retry after 2.947068676s: waiting for machine to come up
	I0717 19:58:33.456714 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:33.457232 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | unable to find current IP address of domain default-k8s-diff-port-711413 in network mk-default-k8s-diff-port-711413
	I0717 19:58:33.457261 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | I0717 19:58:33.457148 1103407 retry.go:31] will retry after 3.074973516s: waiting for machine to come up
	I0717 19:58:37.871095 1103141 start.go:369] acquired machines lock for "embed-certs-114855" in 1m22.018672602s
	I0717 19:58:37.871161 1103141 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:58:37.871175 1103141 fix.go:54] fixHost starting: 
	I0717 19:58:37.871619 1103141 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:58:37.871689 1103141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:58:37.889865 1103141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46381
	I0717 19:58:37.890334 1103141 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:58:37.891044 1103141 main.go:141] libmachine: Using API Version  1
	I0717 19:58:37.891070 1103141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:58:37.891471 1103141 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:58:37.891734 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 19:58:37.891927 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetState
	I0717 19:58:37.893736 1103141 fix.go:102] recreateIfNeeded on embed-certs-114855: state=Stopped err=<nil>
	I0717 19:58:37.893779 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	W0717 19:58:37.893994 1103141 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 19:58:37.896545 1103141 out.go:177] * Restarting existing kvm2 VM for "embed-certs-114855" ...
	I0717 19:58:35.345141 1102136 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.27.3: (1.767506173s)
	I0717 19:58:35.345180 1102136 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.27.3 from cache
	I0717 19:58:35.345211 1102136 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.27.3
	I0717 19:58:35.345273 1102136 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.27.3
	I0717 19:58:37.803066 1102136 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.27.3: (2.457743173s)
	I0717 19:58:37.803106 1102136 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.27.3 from cache
	I0717 19:58:37.803139 1102136 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 19:58:37.803193 1102136 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0717 19:58:38.559165 1102136 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 19:58:38.559222 1102136 cache_images.go:123] Successfully loaded all cached images
	I0717 19:58:38.559231 1102136 cache_images.go:92] LoadImages completed in 18.725611601s
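LoadImages above walks the cached tarballs one by one: the copy is skipped when the remote file already exists, and the `sudo podman load -i` calls on the guest are serialized. A minimal sketch of that loop, assuming hypothetical runSSH and copyIfChanged helpers in place of minikube's ssh_runner:

// Sketch of the image load loop the log describes: for each cached image
// tarball, skip the copy if the remote file already matches, then run
// "sudo podman load -i <tar>" on the guest over SSH.
package main

import (
	"fmt"
	"path"
)

func runSSH(cmd string) error { fmt.Println("ssh:", cmd); return nil }

// copyIfChanged reports whether a copy happened; false means "exists, skipped".
func copyIfChanged(local, remote string) (bool, error) { return false, nil }

func loadCachedImages(localTars []string) error {
	for _, tar := range localTars {
		remote := path.Join("/var/lib/minikube/images", path.Base(tar))
		copied, err := copyIfChanged(tar, remote)
		if err != nil {
			return err
		}
		if !copied {
			fmt.Printf("copy: skipping %s (exists)\n", remote)
		}
		// Loading is serialized because podman load is I/O and CPU heavy.
		if err := runSSH("sudo podman load -i " + remote); err != nil {
			return fmt.Errorf("loading %s: %w", remote, err)
		}
		fmt.Printf("Transferred and loaded %s from cache\n", tar)
	}
	return nil
}

func main() {
	_ = loadCachedImages([]string{
		"/home/jenkins/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.7-0",
	})
}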
	I0717 19:58:38.559363 1102136 ssh_runner.go:195] Run: crio config
	I0717 19:58:38.630364 1102136 cni.go:84] Creating CNI manager for ""
	I0717 19:58:38.630394 1102136 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:58:38.630421 1102136 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 19:58:38.630447 1102136 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.65 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-408472 NodeName:no-preload-408472 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:58:38.630640 1102136 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-408472"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:58:38.630739 1102136 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-408472 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:no-preload-408472 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
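The kubeadm config dumped above is rendered from the options struct printed at kubeadm.go:176. A small sketch of that render step using Go's text/template; the template fragment and field names here are illustrative, not minikube's actual template:

// Render an InitConfiguration fragment like the one above from a parameter
// struct, using only the standard library.
package main

import (
	"os"
	"text/template"
)

const frag = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

type params struct {
	AdvertiseAddress string
	APIServerPort    int
	NodeName         string
	NodeIP           string
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(frag))
	// Values taken from the logged no-preload-408472 options.
	_ = t.Execute(os.Stdout, params{
		AdvertiseAddress: "192.168.61.65",
		APIServerPort:    8443,
		NodeName:         "no-preload-408472",
		NodeIP:           "192.168.61.65",
	})
}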
	I0717 19:58:38.630813 1102136 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 19:58:38.643343 1102136 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:58:38.643443 1102136 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:58:38.653495 1102136 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0717 19:58:36.535628 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.536224 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Found IP for machine: 192.168.72.51
	I0717 19:58:36.536256 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Reserving static IP address...
	I0717 19:58:36.536278 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has current primary IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.536720 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-711413", mac: "52:54:00:7d:d7:a9", ip: "192.168.72.51"} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:36.536756 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | skip adding static IP to network mk-default-k8s-diff-port-711413 - found existing host DHCP lease matching {name: "default-k8s-diff-port-711413", mac: "52:54:00:7d:d7:a9", ip: "192.168.72.51"}
	I0717 19:58:36.536773 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Reserved static IP address: 192.168.72.51
	I0717 19:58:36.536791 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Waiting for SSH to be available...
	I0717 19:58:36.536804 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | Getting to WaitForSSH function...
	I0717 19:58:36.540038 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.540593 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:36.540649 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.540764 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | Using SSH client type: external
	I0717 19:58:36.540799 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | Using SSH private key: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/default-k8s-diff-port-711413/id_rsa (-rw-------)
	I0717 19:58:36.540855 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/default-k8s-diff-port-711413/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:58:36.540876 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | About to run SSH command:
	I0717 19:58:36.540895 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | exit 0
	I0717 19:58:36.637774 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | SSH cmd err, output: <nil>: 
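WaitForSSH above shells out to the system ssh binary with StrictHostKeyChecking=no and runs `exit 0` until it succeeds. The same probe could be done in-process; below is a sketch using golang.org/x/crypto/ssh, assuming key-based auth and deliberately ignoring host keys as the logged options do.

// Probe SSH availability by running `exit 0` on the guest.
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func sshExit0(addr, user, keyPath string) error {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no
		Timeout:         10 * time.Second,            // mirrors ConnectTimeout=10
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0")
}

func main() {
	if err := sshExit0("192.168.72.51:22", "docker", "/path/to/id_rsa"); err != nil {
		fmt.Println("ssh not ready yet:", err)
		return
	}
	fmt.Println("SSH is available")
}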
	I0717 19:58:36.638200 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetConfigRaw
	I0717 19:58:36.638931 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetIP
	I0717 19:58:36.642048 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.642530 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:36.642560 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.642850 1102415 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/config.json ...
	I0717 19:58:36.643061 1102415 machine.go:88] provisioning docker machine ...
	I0717 19:58:36.643080 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	I0717 19:58:36.643344 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetMachineName
	I0717 19:58:36.643516 1102415 buildroot.go:166] provisioning hostname "default-k8s-diff-port-711413"
	I0717 19:58:36.643535 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetMachineName
	I0717 19:58:36.643766 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:58:36.646810 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.647205 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:36.647243 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.647582 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:58:36.647826 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:36.648082 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:36.648275 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:58:36.648470 1102415 main.go:141] libmachine: Using SSH client type: native
	I0717 19:58:36.648883 1102415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.51 22 <nil> <nil>}
	I0717 19:58:36.648898 1102415 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-711413 && echo "default-k8s-diff-port-711413" | sudo tee /etc/hostname
	I0717 19:58:36.784478 1102415 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-711413
	
	I0717 19:58:36.784524 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:58:36.787641 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.788065 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:36.788118 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.788363 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:58:36.788605 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:36.788799 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:36.788942 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:58:36.789239 1102415 main.go:141] libmachine: Using SSH client type: native
	I0717 19:58:36.789869 1102415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.51 22 <nil> <nil>}
	I0717 19:58:36.789916 1102415 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-711413' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-711413/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-711413' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:58:36.923177 1102415 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:58:36.923211 1102415 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16890-1061725/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-1061725/.minikube}
	I0717 19:58:36.923237 1102415 buildroot.go:174] setting up certificates
	I0717 19:58:36.923248 1102415 provision.go:83] configureAuth start
	I0717 19:58:36.923257 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetMachineName
	I0717 19:58:36.923633 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetIP
	I0717 19:58:36.927049 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.927406 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:36.927443 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.927641 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:58:36.930158 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.930705 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:36.930771 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:36.930844 1102415 provision.go:138] copyHostCerts
	I0717 19:58:36.930969 1102415 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem, removing ...
	I0717 19:58:36.930984 1102415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem
	I0717 19:58:36.931064 1102415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem (1123 bytes)
	I0717 19:58:36.931188 1102415 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem, removing ...
	I0717 19:58:36.931201 1102415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem
	I0717 19:58:36.931235 1102415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem (1675 bytes)
	I0717 19:58:36.931315 1102415 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem, removing ...
	I0717 19:58:36.931325 1102415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem
	I0717 19:58:36.931353 1102415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem (1082 bytes)
	I0717 19:58:36.931423 1102415 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-711413 san=[192.168.72.51 192.168.72.51 localhost 127.0.0.1 minikube default-k8s-diff-port-711413]
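provision.go:112 generates a server certificate whose SANs cover the VM IP, localhost, 127.0.0.1, minikube and the profile name, signed by the shared CA under .minikube/certs. A sketch of building such a SAN list with crypto/x509; for brevity this one is self-signed rather than CA-signed.

// Create a server certificate carrying the SAN entries listed in the log.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-711413"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the logged provision step.
		DNSNames:    []string{"localhost", "minikube", "default-k8s-diff-port-711413"},
		IPAddresses: []net.IP{net.ParseIP("192.168.72.51"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}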
	I0717 19:58:37.043340 1102415 provision.go:172] copyRemoteCerts
	I0717 19:58:37.043444 1102415 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:58:37.043487 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:58:37.047280 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.047842 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:37.047879 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.048143 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:58:37.048410 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:37.048648 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:58:37.048844 1102415 sshutil.go:53] new ssh client: &{IP:192.168.72.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/default-k8s-diff-port-711413/id_rsa Username:docker}
	I0717 19:58:37.147255 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 19:58:37.175437 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0717 19:58:37.202827 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:58:37.231780 1102415 provision.go:86] duration metric: configureAuth took 308.515103ms
	I0717 19:58:37.231818 1102415 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:58:37.232118 1102415 config.go:182] Loaded profile config "default-k8s-diff-port-711413": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:58:37.232255 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:58:37.235364 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.235964 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:37.236011 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.236296 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:58:37.236533 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:37.236793 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:37.236976 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:58:37.237175 1102415 main.go:141] libmachine: Using SSH client type: native
	I0717 19:58:37.237831 1102415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.51 22 <nil> <nil>}
	I0717 19:58:37.237866 1102415 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:58:37.601591 1102415 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:58:37.601631 1102415 machine.go:91] provisioned docker machine in 958.556319ms
	I0717 19:58:37.601644 1102415 start.go:300] post-start starting for "default-k8s-diff-port-711413" (driver="kvm2")
	I0717 19:58:37.601665 1102415 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:58:37.601692 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	I0717 19:58:37.602018 1102415 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:58:37.602048 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:58:37.604964 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.605335 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:37.605387 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.605486 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:58:37.605822 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:37.606033 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:58:37.606224 1102415 sshutil.go:53] new ssh client: &{IP:192.168.72.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/default-k8s-diff-port-711413/id_rsa Username:docker}
	I0717 19:58:37.696316 1102415 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:58:37.701409 1102415 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 19:58:37.701442 1102415 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/addons for local assets ...
	I0717 19:58:37.701579 1102415 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/files for local assets ...
	I0717 19:58:37.701694 1102415 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> 10689542.pem in /etc/ssl/certs
	I0717 19:58:37.701827 1102415 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:58:37.711545 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:58:37.739525 1102415 start.go:303] post-start completed in 137.838589ms
	I0717 19:58:37.739566 1102415 fix.go:56] fixHost completed within 19.476203721s
	I0717 19:58:37.739599 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:58:37.742744 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.743095 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:37.743127 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.743298 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:58:37.743568 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:37.743768 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:37.743929 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:58:37.744164 1102415 main.go:141] libmachine: Using SSH client type: native
	I0717 19:58:37.744786 1102415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.51 22 <nil> <nil>}
	I0717 19:58:37.744809 1102415 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 19:58:37.870894 1102415 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689623917.842259641
	
	I0717 19:58:37.870923 1102415 fix.go:206] guest clock: 1689623917.842259641
	I0717 19:58:37.870931 1102415 fix.go:219] Guest: 2023-07-17 19:58:37.842259641 +0000 UTC Remote: 2023-07-17 19:58:37.739572977 +0000 UTC m=+273.789942316 (delta=102.686664ms)
	I0717 19:58:37.870992 1102415 fix.go:190] guest clock delta is within tolerance: 102.686664ms
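The guest clock check reads `date +%s.%N` over SSH and compares it against the host clock; here the delta is about 102ms, inside tolerance, so no resync is needed. A sketch of that comparison follows; the 2s tolerance is illustrative, not minikube's exact threshold.

// Parse the guest's `date +%s.%N` output, compute the skew against the host
// clock, and decide whether a resync would be needed.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseEpoch turns "1689623917.842259641" into a time.Time.
func parseEpoch(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Right-pad to 9 digits so ".8422" means 842200000ns.
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseEpoch("1689623917.842259641") // output of `date +%s.%N`
	if err != nil {
		panic(err)
	}
	host := time.Unix(1689623917, 739572977) // host clock at the same moment
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	if delta < 2*time.Second {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock skewed by %v; would resync\n", delta)
	}
}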
	I0717 19:58:37.871004 1102415 start.go:83] releasing machines lock for "default-k8s-diff-port-711413", held for 19.607687828s
	I0717 19:58:37.871044 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	I0717 19:58:37.871350 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetIP
	I0717 19:58:37.874527 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.874967 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:37.875029 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.875202 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	I0717 19:58:37.875791 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	I0717 19:58:37.876007 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	I0717 19:58:37.876141 1102415 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:58:37.876211 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:58:37.876261 1102415 ssh_runner.go:195] Run: cat /version.json
	I0717 19:58:37.876289 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:58:37.879243 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.879483 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.879717 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:37.879752 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.879861 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:58:37.880090 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:37.880098 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:37.880118 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:37.880204 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:58:37.880335 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:58:37.880427 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:58:37.880513 1102415 sshutil.go:53] new ssh client: &{IP:192.168.72.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/default-k8s-diff-port-711413/id_rsa Username:docker}
	I0717 19:58:37.880582 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:58:37.880714 1102415 sshutil.go:53] new ssh client: &{IP:192.168.72.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/default-k8s-diff-port-711413/id_rsa Username:docker}
	W0717 19:58:37.967909 1102415 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 19:58:37.968017 1102415 ssh_runner.go:195] Run: systemctl --version
	I0717 19:58:37.997996 1102415 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:58:38.148654 1102415 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:58:38.156049 1102415 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:58:38.156151 1102415 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:58:38.177835 1102415 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:58:38.177866 1102415 start.go:469] detecting cgroup driver to use...
	I0717 19:58:38.177945 1102415 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:58:38.196359 1102415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:58:38.209697 1102415 docker.go:196] disabling cri-docker service (if available) ...
	I0717 19:58:38.209777 1102415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:58:38.226250 1102415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:58:38.244868 1102415 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:58:38.385454 1102415 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:58:38.527891 1102415 docker.go:212] disabling docker service ...
	I0717 19:58:38.527973 1102415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:58:38.546083 1102415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:58:38.562767 1102415 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:58:38.702706 1102415 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:58:38.828923 1102415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:58:38.845137 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:58:38.866427 1102415 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:58:38.866511 1102415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:58:38.878067 1102415 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:58:38.878157 1102415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:58:38.892494 1102415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:58:38.905822 1102415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:58:38.917786 1102415 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:58:38.931418 1102415 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:58:38.945972 1102415 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:58:38.946039 1102415 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:58:38.964498 1102415 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:58:38.977323 1102415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:58:39.098593 1102415 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:58:39.320821 1102415 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:58:39.320909 1102415 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:58:39.327195 1102415 start.go:537] Will wait 60s for crictl version
	I0717 19:58:39.327285 1102415 ssh_runner.go:195] Run: which crictl
	I0717 19:58:39.333466 1102415 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:58:39.372542 1102415 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 19:58:39.372643 1102415 ssh_runner.go:195] Run: crio --version
	I0717 19:58:39.419356 1102415 ssh_runner.go:195] Run: crio --version
	I0717 19:58:39.467405 1102415 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0717 19:58:37.898938 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .Start
	I0717 19:58:37.899185 1103141 main.go:141] libmachine: (embed-certs-114855) Ensuring networks are active...
	I0717 19:58:37.900229 1103141 main.go:141] libmachine: (embed-certs-114855) Ensuring network default is active
	I0717 19:58:37.900690 1103141 main.go:141] libmachine: (embed-certs-114855) Ensuring network mk-embed-certs-114855 is active
	I0717 19:58:37.901444 1103141 main.go:141] libmachine: (embed-certs-114855) Getting domain xml...
	I0717 19:58:37.902311 1103141 main.go:141] libmachine: (embed-certs-114855) Creating domain...
	I0717 19:58:39.293109 1103141 main.go:141] libmachine: (embed-certs-114855) Waiting to get IP...
	I0717 19:58:39.294286 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:39.294784 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:39.294877 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:39.294761 1103558 retry.go:31] will retry after 201.93591ms: waiting for machine to come up
	I0717 19:58:39.498428 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:39.499066 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:39.499123 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:39.498979 1103558 retry.go:31] will retry after 321.702493ms: waiting for machine to come up
	I0717 19:58:39.822708 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:39.823258 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:39.823287 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:39.823212 1103558 retry.go:31] will retry after 477.114259ms: waiting for machine to come up
	I0717 19:58:40.302080 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:40.302727 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:40.302755 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:40.302668 1103558 retry.go:31] will retry after 554.321931ms: waiting for machine to come up
	I0717 19:58:38.674825 1102136 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:58:38.697168 1102136 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0717 19:58:38.719030 1102136 ssh_runner.go:195] Run: grep 192.168.61.65	control-plane.minikube.internal$ /etc/hosts
	I0717 19:58:38.724312 1102136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.65	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:58:38.742726 1102136 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472 for IP: 192.168.61.65
	I0717 19:58:38.742830 1102136 certs.go:190] acquiring lock for shared ca certs: {Name:mkdfbc203d7bd874aff2f1c4e5c3352251804e32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:58:38.743029 1102136 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key
	I0717 19:58:38.743082 1102136 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key
	I0717 19:58:38.743238 1102136 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/client.key
	I0717 19:58:38.743316 1102136 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/apiserver.key.71349e66
	I0717 19:58:38.743370 1102136 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/proxy-client.key
	I0717 19:58:38.743527 1102136 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem (1338 bytes)
	W0717 19:58:38.743579 1102136 certs.go:433] ignoring /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954_empty.pem, impossibly tiny 0 bytes
	I0717 19:58:38.743597 1102136 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:58:38.743631 1102136 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem (1082 bytes)
	I0717 19:58:38.743667 1102136 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:58:38.743699 1102136 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem (1675 bytes)
	I0717 19:58:38.743759 1102136 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:58:38.744668 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 19:58:38.773602 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 19:58:38.799675 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:58:38.826050 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 19:58:38.856973 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:58:38.886610 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 19:58:38.916475 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:58:38.945986 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 19:58:38.973415 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem --> /usr/share/ca-certificates/1068954.pem (1338 bytes)
	I0717 19:58:39.002193 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /usr/share/ca-certificates/10689542.pem (1708 bytes)
	I0717 19:58:39.030265 1102136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:58:39.062896 1102136 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:58:39.082877 1102136 ssh_runner.go:195] Run: openssl version
	I0717 19:58:39.090088 1102136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1068954.pem && ln -fs /usr/share/ca-certificates/1068954.pem /etc/ssl/certs/1068954.pem"
	I0717 19:58:39.104372 1102136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1068954.pem
	I0717 19:58:39.110934 1102136 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:52 /usr/share/ca-certificates/1068954.pem
	I0717 19:58:39.111023 1102136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1068954.pem
	I0717 19:58:39.117702 1102136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1068954.pem /etc/ssl/certs/51391683.0"
	I0717 19:58:39.132094 1102136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10689542.pem && ln -fs /usr/share/ca-certificates/10689542.pem /etc/ssl/certs/10689542.pem"
	I0717 19:58:39.149143 1102136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10689542.pem
	I0717 19:58:39.155238 1102136 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:52 /usr/share/ca-certificates/10689542.pem
	I0717 19:58:39.155359 1102136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10689542.pem
	I0717 19:58:39.164149 1102136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10689542.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:58:39.178830 1102136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:58:39.192868 1102136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:58:39.199561 1102136 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:58:39.199663 1102136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:58:39.208054 1102136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:58:39.220203 1102136 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 19:58:39.228030 1102136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:58:39.235220 1102136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:58:39.243450 1102136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:58:39.250709 1102136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:58:39.260912 1102136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:58:39.269318 1102136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
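The -checkend 86400 probes above ask openssl whether each certificate will still be valid 24 hours from now. A minimal Go sketch of the same check (illustrative only, not minikube's code; the path is taken from the log above):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// which is what `openssl x509 -checkend 86400` answers for d = 24h.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}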
	I0717 19:58:39.277511 1102136 kubeadm.go:404] StartCluster: {Name:no-preload-408472 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:no-preload-408472 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.65 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:58:39.277701 1102136 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:58:39.277789 1102136 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:58:39.317225 1102136 cri.go:89] found id: ""
	I0717 19:58:39.317321 1102136 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:58:39.330240 1102136 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 19:58:39.330274 1102136 kubeadm.go:636] restartCluster start
	I0717 19:58:39.330351 1102136 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:58:39.343994 1102136 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:39.345762 1102136 kubeconfig.go:92] found "no-preload-408472" server: "https://192.168.61.65:8443"
	I0717 19:58:39.350027 1102136 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:58:39.360965 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:39.361039 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:39.375103 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:39.875778 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:39.875891 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:39.892869 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:40.375344 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:40.375421 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:40.392992 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:40.875474 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:40.875590 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:40.892666 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:41.375224 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:41.375335 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:41.393833 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:41.875377 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:41.875466 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:41.893226 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:42.375846 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:42.375957 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:42.390397 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:42.876105 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:42.876220 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:42.889082 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:43.375654 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:43.375774 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:43.392598 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:39.469543 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetIP
	I0717 19:58:39.472792 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:39.473333 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:58:39.473386 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:58:39.473640 1102415 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 19:58:39.478276 1102415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:58:39.491427 1102415 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 19:58:39.491514 1102415 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:58:39.527759 1102415 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0717 19:58:39.527856 1102415 ssh_runner.go:195] Run: which lz4
	I0717 19:58:39.532935 1102415 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 19:58:39.537733 1102415 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:58:39.537785 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0717 19:58:41.480847 1102415 crio.go:444] Took 1.947975 seconds to copy over tarball
	I0717 19:58:41.480932 1102415 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 19:58:40.858380 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:40.858925 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:40.858970 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:40.858865 1103558 retry.go:31] will retry after 616.432145ms: waiting for machine to come up
	I0717 19:58:41.476868 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:41.477399 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:41.477434 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:41.477348 1103558 retry.go:31] will retry after 780.737319ms: waiting for machine to come up
	I0717 19:58:42.259853 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:42.260278 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:42.260310 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:42.260216 1103558 retry.go:31] will retry after 858.918849ms: waiting for machine to come up
	I0717 19:58:43.120599 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:43.121211 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:43.121247 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:43.121155 1103558 retry.go:31] will retry after 1.359881947s: waiting for machine to come up
	I0717 19:58:44.482733 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:44.483173 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:44.483203 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:44.483095 1103558 retry.go:31] will retry after 1.298020016s: waiting for machine to come up
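The retry.go:31 lines above show the driver polling libvirt with steadily growing delays until the VM reports an IP. A generic sketch of that retry-with-backoff pattern (not the actual minikube retry package; the attempt count, doubling, and jitter here are illustrative):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff runs op until it succeeds or attempts are exhausted,
	// roughly doubling a jittered delay between tries.
	func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
		delay := base
		var err error
		for i := 1; i <= attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay / 2)))
			fmt.Printf("retry %d: will retry after %v: %v\n", i, delay+jitter, err)
			time.Sleep(delay + jitter)
			delay *= 2
		}
		return err
	}

	func main() {
		_ = retryWithBackoff(5, 500*time.Millisecond, func() error {
			return errors.New("waiting for machine to come up")
		})
	}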
	I0717 19:58:43.875260 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:43.875367 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:43.892010 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:44.376275 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:44.376378 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:44.394725 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:44.875258 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:44.875377 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:44.890500 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:45.376203 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:45.376337 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:45.392119 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:45.875466 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:45.875573 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:45.888488 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:46.376141 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:46.376288 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:46.391072 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:46.875635 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:46.875797 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:46.895087 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:47.375551 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:47.375653 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:47.392620 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:47.875197 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:47.875340 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:47.887934 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:48.375469 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:48.375588 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:48.392548 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:44.570404 1102415 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.089433908s)
	I0717 19:58:44.570451 1102415 crio.go:451] Took 3.089562 seconds to extract the tarball
	I0717 19:58:44.570465 1102415 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 19:58:44.615062 1102415 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:58:44.660353 1102415 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 19:58:44.660385 1102415 cache_images.go:84] Images are preloaded, skipping loading
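Whether the preload is usable is decided by listing images through crictl, as the `crictl images --output json` runs above show. A small sketch of that kind of check (not minikube's cache_images.go; it assumes crictl's JSON carries an `images` array whose entries expose `repoTags`):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// imageList mirrors the assumed shape of `crictl images --output json`.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	// hasImage returns true when the runtime already holds the given tag.
	func hasImage(want string) (bool, error) {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			return false, err
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			return false, err
		}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				if tag == want {
					return true, nil
				}
			}
		}
		return false, nil
	}

	func main() {
		ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.27.3")
		fmt.Println(ok, err)
	}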
	I0717 19:58:44.660468 1102415 ssh_runner.go:195] Run: crio config
	I0717 19:58:44.726880 1102415 cni.go:84] Creating CNI manager for ""
	I0717 19:58:44.726915 1102415 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:58:44.726946 1102415 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 19:58:44.726973 1102415 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.51 APIServerPort:8444 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-711413 NodeName:default-k8s-diff-port-711413 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:58:44.727207 1102415 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.51
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-711413"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:58:44.727340 1102415 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-711413 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-711413 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0717 19:58:44.727430 1102415 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 19:58:44.740398 1102415 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:58:44.740509 1102415 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:58:44.751288 1102415 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I0717 19:58:44.769779 1102415 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:58:44.788216 1102415 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I0717 19:58:44.808085 1102415 ssh_runner.go:195] Run: grep 192.168.72.51	control-plane.minikube.internal$ /etc/hosts
	I0717 19:58:44.812829 1102415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:58:44.826074 1102415 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413 for IP: 192.168.72.51
	I0717 19:58:44.826123 1102415 certs.go:190] acquiring lock for shared ca certs: {Name:mkdfbc203d7bd874aff2f1c4e5c3352251804e32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:58:44.826373 1102415 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key
	I0717 19:58:44.826440 1102415 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key
	I0717 19:58:44.826543 1102415 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/client.key
	I0717 19:58:44.826629 1102415 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/apiserver.key.f6db28d6
	I0717 19:58:44.826697 1102415 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/proxy-client.key
	I0717 19:58:44.826855 1102415 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem (1338 bytes)
	W0717 19:58:44.826902 1102415 certs.go:433] ignoring /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954_empty.pem, impossibly tiny 0 bytes
	I0717 19:58:44.826915 1102415 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:58:44.826953 1102415 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem (1082 bytes)
	I0717 19:58:44.826988 1102415 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:58:44.827026 1102415 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem (1675 bytes)
	I0717 19:58:44.827091 1102415 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:58:44.828031 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 19:58:44.856357 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 19:58:44.884042 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:58:44.915279 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 19:58:44.945170 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:58:44.974151 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 19:58:45.000387 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:58:45.027617 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 19:58:45.054305 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem --> /usr/share/ca-certificates/1068954.pem (1338 bytes)
	I0717 19:58:45.080828 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /usr/share/ca-certificates/10689542.pem (1708 bytes)
	I0717 19:58:45.107437 1102415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:58:45.135588 1102415 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:58:45.155297 1102415 ssh_runner.go:195] Run: openssl version
	I0717 19:58:45.162096 1102415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10689542.pem && ln -fs /usr/share/ca-certificates/10689542.pem /etc/ssl/certs/10689542.pem"
	I0717 19:58:45.175077 1102415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10689542.pem
	I0717 19:58:45.180966 1102415 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:52 /usr/share/ca-certificates/10689542.pem
	I0717 19:58:45.181050 1102415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10689542.pem
	I0717 19:58:45.187351 1102415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10689542.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:58:45.199795 1102415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:58:45.214273 1102415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:58:45.220184 1102415 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:58:45.220269 1102415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:58:45.227207 1102415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:58:45.239921 1102415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1068954.pem && ln -fs /usr/share/ca-certificates/1068954.pem /etc/ssl/certs/1068954.pem"
	I0717 19:58:45.252978 1102415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1068954.pem
	I0717 19:58:45.259164 1102415 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:52 /usr/share/ca-certificates/1068954.pem
	I0717 19:58:45.259257 1102415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1068954.pem
	I0717 19:58:45.266134 1102415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1068954.pem /etc/ssl/certs/51391683.0"
	I0717 19:58:45.281302 1102415 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 19:58:45.287179 1102415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:58:45.294860 1102415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:58:45.302336 1102415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:58:45.309621 1102415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:58:45.316590 1102415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:58:45.323564 1102415 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 19:58:45.330904 1102415 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-711413 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-711413 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.51 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:58:45.331050 1102415 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:58:45.331115 1102415 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:58:45.368522 1102415 cri.go:89] found id: ""
	I0717 19:58:45.368606 1102415 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:58:45.380610 1102415 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 19:58:45.380640 1102415 kubeadm.go:636] restartCluster start
	I0717 19:58:45.380711 1102415 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:58:45.391395 1102415 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:45.392845 1102415 kubeconfig.go:92] found "default-k8s-diff-port-711413" server: "https://192.168.72.51:8444"
	I0717 19:58:45.395718 1102415 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:58:45.405869 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:45.405954 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:45.417987 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:45.918789 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:45.918924 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:45.935620 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:46.418786 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:46.418918 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:46.435879 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:46.918441 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:46.918570 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:46.934753 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:47.418315 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:47.418429 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:47.434411 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:47.918984 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:47.919143 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:47.930556 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:48.418827 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:48.418915 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:48.430779 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:48.918288 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:48.918395 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:48.929830 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:45.782651 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:45.853667 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:45.853691 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:45.783094 1103558 retry.go:31] will retry after 2.002921571s: waiting for machine to come up
	I0717 19:58:47.788455 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:47.788965 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:47.788995 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:47.788914 1103558 retry.go:31] will retry after 2.108533646s: waiting for machine to come up
	I0717 19:58:49.899541 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:49.900028 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:49.900073 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:49.899974 1103558 retry.go:31] will retry after 3.529635748s: waiting for machine to come up
	I0717 19:58:48.875708 1102136 api_server.go:166] Checking apiserver status ...
	I0717 19:58:48.875803 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:48.893686 1102136 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:49.362030 1102136 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0717 19:58:49.362079 1102136 kubeadm.go:1128] stopping kube-system containers ...
	I0717 19:58:49.362096 1102136 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:58:49.362166 1102136 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:58:49.405900 1102136 cri.go:89] found id: ""
	I0717 19:58:49.405997 1102136 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:58:49.429666 1102136 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:58:49.440867 1102136 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:58:49.440938 1102136 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:58:49.454993 1102136 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 19:58:49.455023 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:49.606548 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:50.568083 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:50.782373 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:50.895178 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:50.999236 1102136 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:58:50.999321 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:51.519969 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:52.019769 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:52.519618 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:53.020330 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:53.519378 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:53.549727 1102136 api_server.go:72] duration metric: took 2.550491567s to wait for apiserver process to appear ...
	I0717 19:58:53.549757 1102136 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:58:53.549778 1102136 api_server.go:253] Checking apiserver healthz at https://192.168.61.65:8443/healthz ...
	I0717 19:58:49.418724 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:49.418839 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:49.431867 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:49.918433 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:49.918602 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:49.933324 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:50.418991 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:50.419113 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:50.433912 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:50.919128 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:50.919228 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:50.934905 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:51.418418 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:51.418557 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:51.430640 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:51.918136 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:51.918248 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:51.933751 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:52.418277 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:52.418388 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:52.434907 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:52.918570 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:52.918702 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:52.933426 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:53.418734 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:53.418828 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:53.431710 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:53.918381 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:53.918502 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:53.930053 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:53.431544 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:53.432055 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:53.432087 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:53.431995 1103558 retry.go:31] will retry after 3.133721334s: waiting for machine to come up
	I0717 19:58:57.990532 1102136 api_server.go:279] https://192.168.61.65:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:58:57.990581 1102136 api_server.go:103] status: https://192.168.61.65:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:58:58.491387 1102136 api_server.go:253] Checking apiserver healthz at https://192.168.61.65:8443/healthz ...
	I0717 19:58:58.501594 1102136 api_server.go:279] https://192.168.61.65:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:58:58.501636 1102136 api_server.go:103] status: https://192.168.61.65:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
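The healthz polling above keeps hitting https://<node>:8443/healthz until the 403/500 responses turn into a 200. A minimal sketch of such a poll loop (illustrative only, not minikube's api_server.go; skipping TLS verification is an assumption made for a local probe):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns 200 or the timeout elapses,
	// printing the non-200 bodies much like the 403/500 responses logged above.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption for this sketch only: skip certificate verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("healthz not ready within %v", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.65:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}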
	I0717 19:58:54.418156 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:54.418290 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:54.430262 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:54.918831 1102415 api_server.go:166] Checking apiserver status ...
	I0717 19:58:54.918933 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:58:54.930380 1102415 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:58:55.406385 1102415 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0717 19:58:55.406432 1102415 kubeadm.go:1128] stopping kube-system containers ...
	I0717 19:58:55.406451 1102415 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:58:55.406530 1102415 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:58:55.444364 1102415 cri.go:89] found id: ""
	I0717 19:58:55.444447 1102415 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:58:55.460367 1102415 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:58:55.472374 1102415 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:58:55.472469 1102415 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:58:55.482078 1102415 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 19:58:55.482121 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:55.630428 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:56.221310 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:56.460424 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:56.570707 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:56.691954 1102415 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:58:56.692053 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:57.209115 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:57.708801 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:58.209204 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:58.709268 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:58.991630 1102136 api_server.go:253] Checking apiserver healthz at https://192.168.61.65:8443/healthz ...
	I0717 19:58:58.999253 1102136 api_server.go:279] https://192.168.61.65:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:58:58.999295 1102136 api_server.go:103] status: https://192.168.61.65:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:58:59.491062 1102136 api_server.go:253] Checking apiserver healthz at https://192.168.61.65:8443/healthz ...
	I0717 19:58:59.498441 1102136 api_server.go:279] https://192.168.61.65:8443/healthz returned 200:
	ok
	I0717 19:58:59.514314 1102136 api_server.go:141] control plane version: v1.27.3
	I0717 19:58:59.514353 1102136 api_server.go:131] duration metric: took 5.964587051s to wait for apiserver health ...
	I0717 19:58:59.514368 1102136 cni.go:84] Creating CNI manager for ""
	I0717 19:58:59.514403 1102136 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:58:59.516809 1102136 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
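The [+]/[-] lines above are the kube-apiserver's verbose healthz report: each post-start hook is listed, and the endpoint keeps answering 500 while any hook is still marked [-] (here rbac/bootstrap-roles and, earlier, the scheduling bootstrap priority classes), which is what the retry loop above is waiting out. The same report can be pulled by hand from inside the VM; a minimal sketch, reusing the kubectl binary and kubeconfig paths that already appear in this log:

	sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw '/healthz?verbose'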
	I0717 19:58:56.567585 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:58:56.568167 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | unable to find current IP address of domain embed-certs-114855 in network mk-embed-certs-114855
	I0717 19:58:56.568203 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | I0717 19:58:56.568069 1103558 retry.go:31] will retry after 4.627498539s: waiting for machine to come up
	I0717 19:58:59.518908 1102136 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:58:59.549246 1102136 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 19:58:59.598652 1102136 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:58:59.614418 1102136 system_pods.go:59] 8 kube-system pods found
	I0717 19:58:59.614482 1102136 system_pods.go:61] "coredns-5d78c9869d-9mxdj" [fbff09fd-436d-4208-9187-b6312aa1c223] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 19:58:59.614506 1102136 system_pods.go:61] "etcd-no-preload-408472" [7125f19d-c1ed-4b1f-99be-207bbf5d8c70] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:58:59.614519 1102136 system_pods.go:61] "kube-apiserver-no-preload-408472" [0e54eaed-daad-434a-ad3b-96f7fb924099] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:58:59.614529 1102136 system_pods.go:61] "kube-controller-manager-no-preload-408472" [38ee8079-8142-45a9-8d5a-4abbc5c8bb3b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:58:59.614547 1102136 system_pods.go:61] "kube-proxy-cntdn" [8653567b-abf9-468c-a030-45fc53fa0cc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 19:58:59.614558 1102136 system_pods.go:61] "kube-scheduler-no-preload-408472" [e51560a1-c1b0-407c-8635-df512bd033b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:58:59.614575 1102136 system_pods.go:61] "metrics-server-74d5c6b9c-hnngh" [dfff837e-dbba-4795-935d-9562d2744169] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:58:59.614637 1102136 system_pods.go:61] "storage-provisioner" [1aefd8ef-dec9-4e37-8648-8e5a62622cd3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 19:58:59.614658 1102136 system_pods.go:74] duration metric: took 15.975122ms to wait for pod list to return data ...
	I0717 19:58:59.614669 1102136 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:58:59.621132 1102136 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 19:58:59.621181 1102136 node_conditions.go:123] node cpu capacity is 2
	I0717 19:58:59.621197 1102136 node_conditions.go:105] duration metric: took 6.519635ms to run NodePressure ...
	I0717 19:58:59.621224 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:58:59.909662 1102136 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 19:58:59.915153 1102136 kubeadm.go:787] kubelet initialised
	I0717 19:58:59.915190 1102136 kubeadm.go:788] duration metric: took 5.491139ms waiting for restarted kubelet to initialise ...
	I0717 19:58:59.915201 1102136 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:58:59.925196 1102136 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-9mxdj" in "kube-system" namespace to be "Ready" ...
	I0717 19:58:59.934681 1102136 pod_ready.go:97] node "no-preload-408472" hosting pod "coredns-5d78c9869d-9mxdj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:58:59.934715 1102136 pod_ready.go:81] duration metric: took 9.478384ms waiting for pod "coredns-5d78c9869d-9mxdj" in "kube-system" namespace to be "Ready" ...
	E0717 19:58:59.934728 1102136 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-408472" hosting pod "coredns-5d78c9869d-9mxdj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:58:59.934742 1102136 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:58:59.949704 1102136 pod_ready.go:97] node "no-preload-408472" hosting pod "etcd-no-preload-408472" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:58:59.949744 1102136 pod_ready.go:81] duration metric: took 14.992167ms waiting for pod "etcd-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	E0717 19:58:59.949757 1102136 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-408472" hosting pod "etcd-no-preload-408472" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:58:59.949766 1102136 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:58:59.958029 1102136 pod_ready.go:97] node "no-preload-408472" hosting pod "kube-apiserver-no-preload-408472" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:58:59.958083 1102136 pod_ready.go:81] duration metric: took 8.306713ms waiting for pod "kube-apiserver-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	E0717 19:58:59.958096 1102136 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-408472" hosting pod "kube-apiserver-no-preload-408472" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:58:59.958110 1102136 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:00.003638 1102136 pod_ready.go:97] node "no-preload-408472" hosting pod "kube-controller-manager-no-preload-408472" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:59:00.003689 1102136 pod_ready.go:81] duration metric: took 45.565817ms waiting for pod "kube-controller-manager-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:00.003702 1102136 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-408472" hosting pod "kube-controller-manager-no-preload-408472" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:59:00.003714 1102136 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cntdn" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:00.403384 1102136 pod_ready.go:97] node "no-preload-408472" hosting pod "kube-proxy-cntdn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:59:00.403421 1102136 pod_ready.go:81] duration metric: took 399.697327ms waiting for pod "kube-proxy-cntdn" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:00.403431 1102136 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-408472" hosting pod "kube-proxy-cntdn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:59:00.403440 1102136 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:00.803159 1102136 pod_ready.go:97] node "no-preload-408472" hosting pod "kube-scheduler-no-preload-408472" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:59:00.803192 1102136 pod_ready.go:81] duration metric: took 399.744356ms waiting for pod "kube-scheduler-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:00.803205 1102136 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-408472" hosting pod "kube-scheduler-no-preload-408472" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:59:00.803217 1102136 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:01.206222 1102136 pod_ready.go:97] node "no-preload-408472" hosting pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:59:01.206247 1102136 pod_ready.go:81] duration metric: took 403.0216ms waiting for pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:01.206256 1102136 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-408472" hosting pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-408472" has status "Ready":"False"
	I0717 19:59:01.206271 1102136 pod_ready.go:38] duration metric: took 1.291054316s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:59:01.206293 1102136 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 19:59:01.225481 1102136 ops.go:34] apiserver oom_adj: -16
	I0717 19:59:01.225516 1102136 kubeadm.go:640] restartCluster took 21.895234291s
	I0717 19:59:01.225528 1102136 kubeadm.go:406] StartCluster complete in 21.948029137s
	I0717 19:59:01.225551 1102136 settings.go:142] acquiring lock: {Name:mk3323296c186763db6ebb5a1c5ae94a6b1a6242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:59:01.225672 1102136 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 19:59:01.228531 1102136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/kubeconfig: {Name:mk7f4fe48169c87be980c7edd9dbe55d4ea8b9ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:59:01.228913 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 19:59:01.229088 1102136 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 19:59:01.229192 1102136 config.go:182] Loaded profile config "no-preload-408472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:59:01.229244 1102136 addons.go:69] Setting metrics-server=true in profile "no-preload-408472"
	I0717 19:59:01.229249 1102136 addons.go:69] Setting default-storageclass=true in profile "no-preload-408472"
	I0717 19:59:01.229280 1102136 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-408472"
	I0717 19:59:01.229299 1102136 addons.go:231] Setting addon metrics-server=true in "no-preload-408472"
	W0717 19:59:01.229307 1102136 addons.go:240] addon metrics-server should already be in state true
	I0717 19:59:01.229241 1102136 addons.go:69] Setting storage-provisioner=true in profile "no-preload-408472"
	I0717 19:59:01.229353 1102136 addons.go:231] Setting addon storage-provisioner=true in "no-preload-408472"
	W0717 19:59:01.229366 1102136 addons.go:240] addon storage-provisioner should already be in state true
	I0717 19:59:01.229440 1102136 host.go:66] Checking if "no-preload-408472" exists ...
	I0717 19:59:01.229447 1102136 host.go:66] Checking if "no-preload-408472" exists ...
	I0717 19:59:01.229764 1102136 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:01.229804 1102136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:01.229833 1102136 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:01.229854 1102136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:01.229897 1102136 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:01.229943 1102136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:01.235540 1102136 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-408472" context rescaled to 1 replicas
	I0717 19:59:01.235641 1102136 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.65 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:59:01.239320 1102136 out.go:177] * Verifying Kubernetes components...
	I0717 19:59:01.241167 1102136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:59:01.247222 1102136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43199
	I0717 19:59:01.247751 1102136 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:01.248409 1102136 main.go:141] libmachine: Using API Version  1
	I0717 19:59:01.248438 1102136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:01.248825 1102136 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:01.249141 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetState
	I0717 19:59:01.249823 1102136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42025
	I0717 19:59:01.249829 1102136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34569
	I0717 19:59:01.250716 1102136 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:01.250724 1102136 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:01.251383 1102136 main.go:141] libmachine: Using API Version  1
	I0717 19:59:01.251409 1102136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:01.251591 1102136 main.go:141] libmachine: Using API Version  1
	I0717 19:59:01.251612 1102136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:01.252011 1102136 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:01.252078 1102136 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:01.252646 1102136 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:01.252679 1102136 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:01.252688 1102136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:01.252700 1102136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:01.270584 1102136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33003
	I0717 19:59:01.270664 1102136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40173
	I0717 19:59:01.271057 1102136 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:01.271170 1102136 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:01.271634 1102136 main.go:141] libmachine: Using API Version  1
	I0717 19:59:01.271656 1102136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:01.271782 1102136 main.go:141] libmachine: Using API Version  1
	I0717 19:59:01.271807 1102136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:01.272018 1102136 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:01.272158 1102136 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:01.272237 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetState
	I0717 19:59:01.272362 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetState
	I0717 19:59:01.274521 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	I0717 19:59:01.274525 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	I0717 19:59:01.277458 1102136 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 19:59:01.279611 1102136 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:59:02.603147 1101908 start.go:369] acquired machines lock for "old-k8s-version-149000" in 1m3.679538618s
	I0717 19:59:02.603207 1101908 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:59:02.603219 1101908 fix.go:54] fixHost starting: 
	I0717 19:59:02.603691 1101908 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:02.603736 1101908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:02.625522 1101908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46275
	I0717 19:59:02.626230 1101908 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:02.626836 1101908 main.go:141] libmachine: Using API Version  1
	I0717 19:59:02.626876 1101908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:02.627223 1101908 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:02.627395 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	I0717 19:59:02.627513 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetState
	I0717 19:59:02.629627 1101908 fix.go:102] recreateIfNeeded on old-k8s-version-149000: state=Stopped err=<nil>
	I0717 19:59:02.629669 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	W0717 19:59:02.629894 1101908 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 19:59:02.632584 1101908 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-149000" ...
	I0717 19:59:01.279643 1102136 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 19:59:01.281507 1102136 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:59:01.281513 1102136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 19:59:01.281520 1102136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 19:59:01.281545 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:59:01.281545 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:59:01.286403 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:59:01.286708 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:59:01.286766 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:59:01.286801 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:59:01.287001 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:59:01.287264 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:59:01.287523 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:59:01.287525 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:59:01.287606 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:59:01.287736 1102136 sshutil.go:53] new ssh client: &{IP:192.168.61.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/no-preload-408472/id_rsa Username:docker}
	I0717 19:59:01.287791 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:59:01.288610 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:59:01.288821 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:59:01.288982 1102136 sshutil.go:53] new ssh client: &{IP:192.168.61.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/no-preload-408472/id_rsa Username:docker}
	I0717 19:59:01.291242 1102136 addons.go:231] Setting addon default-storageclass=true in "no-preload-408472"
	W0717 19:59:01.291259 1102136 addons.go:240] addon default-storageclass should already be in state true
	I0717 19:59:01.291287 1102136 host.go:66] Checking if "no-preload-408472" exists ...
	I0717 19:59:01.291542 1102136 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:01.291569 1102136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:01.309690 1102136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43629
	I0717 19:59:01.310234 1102136 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:01.310915 1102136 main.go:141] libmachine: Using API Version  1
	I0717 19:59:01.310944 1102136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:01.311356 1102136 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:01.311903 1102136 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:01.311953 1102136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:01.350859 1102136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36839
	I0717 19:59:01.351342 1102136 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:01.351922 1102136 main.go:141] libmachine: Using API Version  1
	I0717 19:59:01.351950 1102136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:01.352334 1102136 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:01.352512 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetState
	I0717 19:59:01.354421 1102136 main.go:141] libmachine: (no-preload-408472) Calling .DriverName
	I0717 19:59:01.354815 1102136 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 19:59:01.354832 1102136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 19:59:01.354853 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHHostname
	I0717 19:59:01.358180 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:59:01.358632 1102136 main.go:141] libmachine: (no-preload-408472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:75:ac", ip: ""} in network mk-no-preload-408472: {Iface:virbr4 ExpiryTime:2023-07-17 20:58:06 +0000 UTC Type:0 Mac:52:54:00:36:75:ac Iaid: IPaddr:192.168.61.65 Prefix:24 Hostname:no-preload-408472 Clientid:01:52:54:00:36:75:ac}
	I0717 19:59:01.358651 1102136 main.go:141] libmachine: (no-preload-408472) DBG | domain no-preload-408472 has defined IP address 192.168.61.65 and MAC address 52:54:00:36:75:ac in network mk-no-preload-408472
	I0717 19:59:01.358833 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHPort
	I0717 19:59:01.359049 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHKeyPath
	I0717 19:59:01.359435 1102136 main.go:141] libmachine: (no-preload-408472) Calling .GetSSHUsername
	I0717 19:59:01.359582 1102136 sshutil.go:53] new ssh client: &{IP:192.168.61.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/no-preload-408472/id_rsa Username:docker}
	I0717 19:59:01.510575 1102136 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 19:59:01.510598 1102136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 19:59:01.534331 1102136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 19:59:01.545224 1102136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:59:01.582904 1102136 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 19:59:01.582945 1102136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 19:59:01.645312 1102136 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:59:01.645342 1102136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 19:59:01.715240 1102136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:59:01.746252 1102136 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0717 19:59:01.746249 1102136 node_ready.go:35] waiting up to 6m0s for node "no-preload-408472" to be "Ready" ...
	I0717 19:58:59.208473 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:58:59.241367 1102415 api_server.go:72] duration metric: took 2.549409381s to wait for apiserver process to appear ...
	I0717 19:58:59.241403 1102415 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:58:59.241432 1102415 api_server.go:253] Checking apiserver healthz at https://192.168.72.51:8444/healthz ...
	I0717 19:59:03.909722 1102415 api_server.go:279] https://192.168.72.51:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:59:03.909763 1102415 api_server.go:103] status: https://192.168.72.51:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:59:03.702857 1102136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.168474279s)
	I0717 19:59:03.702921 1102136 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:03.702938 1102136 main.go:141] libmachine: (no-preload-408472) Calling .Close
	I0717 19:59:03.703307 1102136 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:03.703331 1102136 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:03.703343 1102136 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:03.703353 1102136 main.go:141] libmachine: (no-preload-408472) Calling .Close
	I0717 19:59:03.703705 1102136 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:03.703735 1102136 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:03.703753 1102136 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:03.703766 1102136 main.go:141] libmachine: (no-preload-408472) Calling .Close
	I0717 19:59:03.705061 1102136 main.go:141] libmachine: (no-preload-408472) DBG | Closing plugin on server side
	I0717 19:59:03.705164 1102136 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:03.705187 1102136 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:03.793171 1102136 node_ready.go:58] node "no-preload-408472" has status "Ready":"False"
	I0717 19:59:04.294821 1102136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.749544143s)
	I0717 19:59:04.294904 1102136 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:04.294922 1102136 main.go:141] libmachine: (no-preload-408472) Calling .Close
	I0717 19:59:04.295362 1102136 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:04.295380 1102136 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:04.295391 1102136 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:04.295403 1102136 main.go:141] libmachine: (no-preload-408472) Calling .Close
	I0717 19:59:04.295470 1102136 main.go:141] libmachine: (no-preload-408472) DBG | Closing plugin on server side
	I0717 19:59:04.295674 1102136 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:04.295703 1102136 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:04.349340 1102136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.634046821s)
	I0717 19:59:04.349410 1102136 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:04.349428 1102136 main.go:141] libmachine: (no-preload-408472) Calling .Close
	I0717 19:59:04.349817 1102136 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:04.349837 1102136 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:04.349848 1102136 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:04.349858 1102136 main.go:141] libmachine: (no-preload-408472) Calling .Close
	I0717 19:59:04.349864 1102136 main.go:141] libmachine: (no-preload-408472) DBG | Closing plugin on server side
	I0717 19:59:04.350081 1102136 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:04.350097 1102136 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:04.350116 1102136 addons.go:467] Verifying addon metrics-server=true in "no-preload-408472"
	I0717 19:59:04.353040 1102136 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
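The addon manifests applied above correspond to the addons the profile config requested (default-storageclass, storage-provisioner and metrics-server in the toEnable map earlier in this log). The same state can be inspected or toggled from the host with the minikube CLI; a minimal sketch, assuming the profile name shown in the log:

	minikube -p no-preload-408472 addons list
	minikube -p no-preload-408472 addons enable metrics-server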
	I0717 19:59:01.198818 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.199367 1103141 main.go:141] libmachine: (embed-certs-114855) Found IP for machine: 192.168.39.213
	I0717 19:59:01.199394 1103141 main.go:141] libmachine: (embed-certs-114855) Reserving static IP address...
	I0717 19:59:01.199415 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has current primary IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.199879 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "embed-certs-114855", mac: "52:54:00:d6:57:9a", ip: "192.168.39.213"} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:01.199916 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | skip adding static IP to network mk-embed-certs-114855 - found existing host DHCP lease matching {name: "embed-certs-114855", mac: "52:54:00:d6:57:9a", ip: "192.168.39.213"}
	I0717 19:59:01.199934 1103141 main.go:141] libmachine: (embed-certs-114855) Reserved static IP address: 192.168.39.213
	I0717 19:59:01.199952 1103141 main.go:141] libmachine: (embed-certs-114855) Waiting for SSH to be available...
	I0717 19:59:01.199960 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | Getting to WaitForSSH function...
	I0717 19:59:01.202401 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.202876 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:01.202910 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.203075 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | Using SSH client type: external
	I0717 19:59:01.203121 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | Using SSH private key: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/embed-certs-114855/id_rsa (-rw-------)
	I0717 19:59:01.203172 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.213 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/embed-certs-114855/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:59:01.203195 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | About to run SSH command:
	I0717 19:59:01.203208 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | exit 0
	I0717 19:59:01.298366 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | SSH cmd err, output: <nil>: 
	I0717 19:59:01.298876 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetConfigRaw
	I0717 19:59:01.299753 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetIP
	I0717 19:59:01.303356 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.304237 1103141 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855/config.json ...
	I0717 19:59:01.304526 1103141 machine.go:88] provisioning docker machine ...
	I0717 19:59:01.304569 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 19:59:01.304668 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:01.304694 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.304847 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetMachineName
	I0717 19:59:01.305079 1103141 buildroot.go:166] provisioning hostname "embed-certs-114855"
	I0717 19:59:01.305103 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetMachineName
	I0717 19:59:01.305324 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 19:59:01.308214 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.308591 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:01.308630 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.308805 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 19:59:01.309016 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:01.309195 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:01.309346 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 19:59:01.309591 1103141 main.go:141] libmachine: Using SSH client type: native
	I0717 19:59:01.310205 1103141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0717 19:59:01.310227 1103141 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-114855 && echo "embed-certs-114855" | sudo tee /etc/hostname
	I0717 19:59:01.453113 1103141 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-114855
	
	I0717 19:59:01.453149 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 19:59:01.456502 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.456918 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:01.456981 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.457107 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 19:59:01.457291 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:01.457514 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:01.457711 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 19:59:01.457923 1103141 main.go:141] libmachine: Using SSH client type: native
	I0717 19:59:01.458567 1103141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0717 19:59:01.458597 1103141 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-114855' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-114855/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-114855' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:59:01.599062 1103141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:59:01.599112 1103141 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16890-1061725/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-1061725/.minikube}
	I0717 19:59:01.599143 1103141 buildroot.go:174] setting up certificates
	I0717 19:59:01.599161 1103141 provision.go:83] configureAuth start
	I0717 19:59:01.599194 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetMachineName
	I0717 19:59:01.599579 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetIP
	I0717 19:59:01.602649 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.603014 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:01.603050 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.603218 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 19:59:01.606042 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.606485 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:01.606531 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.606679 1103141 provision.go:138] copyHostCerts
	I0717 19:59:01.606754 1103141 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem, removing ...
	I0717 19:59:01.606767 1103141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem
	I0717 19:59:01.606839 1103141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem (1082 bytes)
	I0717 19:59:01.607009 1103141 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem, removing ...
	I0717 19:59:01.607025 1103141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem
	I0717 19:59:01.607061 1103141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem (1123 bytes)
	I0717 19:59:01.607158 1103141 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem, removing ...
	I0717 19:59:01.607174 1103141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem
	I0717 19:59:01.607204 1103141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem (1675 bytes)
	I0717 19:59:01.607298 1103141 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem org=jenkins.embed-certs-114855 san=[192.168.39.213 192.168.39.213 localhost 127.0.0.1 minikube embed-certs-114855]
	I0717 19:59:01.721082 1103141 provision.go:172] copyRemoteCerts
	I0717 19:59:01.721179 1103141 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:59:01.721223 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 19:59:01.724636 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.725093 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:01.725127 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.725418 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 19:59:01.725708 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:01.725896 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 19:59:01.726056 1103141 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/embed-certs-114855/id_rsa Username:docker}
	I0717 19:59:01.826710 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0717 19:59:01.861153 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 19:59:01.889779 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 19:59:01.919893 1103141 provision.go:86] duration metric: configureAuth took 320.712718ms
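configureAuth above regenerates the machine's TLS material and copies it into /etc/docker on the guest; the server certificate is issued for the SANs listed in the provision step (192.168.39.213, localhost, 127.0.0.1, minikube, embed-certs-114855). A minimal sketch for verifying those SANs on the node, assuming a standard openssl binary is present in the guest image:

	sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'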
	I0717 19:59:01.919929 1103141 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:59:01.920192 1103141 config.go:182] Loaded profile config "embed-certs-114855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:59:01.920283 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 19:59:01.923585 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.926174 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:01.926264 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:01.926897 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 19:59:01.927167 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:01.927365 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:01.927512 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 19:59:01.927712 1103141 main.go:141] libmachine: Using SSH client type: native
	I0717 19:59:01.928326 1103141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0717 19:59:01.928350 1103141 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:59:02.302372 1103141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:59:02.302427 1103141 machine.go:91] provisioned docker machine in 997.853301ms
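The "%!s(MISSING)" token in the command above is a logging artifact: the command template contains a literal %s that the logger treated as a format verb with no argument. Allowing for that, the command actually sent over SSH is, in effect:

	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio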
	I0717 19:59:02.302441 1103141 start.go:300] post-start starting for "embed-certs-114855" (driver="kvm2")
	I0717 19:59:02.302455 1103141 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:59:02.302487 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 19:59:02.302859 1103141 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:59:02.302900 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 19:59:02.305978 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:02.306544 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:02.306626 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:02.306769 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 19:59:02.306996 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:02.307231 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 19:59:02.307403 1103141 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/embed-certs-114855/id_rsa Username:docker}
	I0717 19:59:02.408835 1103141 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:59:02.415119 1103141 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 19:59:02.415157 1103141 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/addons for local assets ...
	I0717 19:59:02.415256 1103141 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/files for local assets ...
	I0717 19:59:02.415444 1103141 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> 10689542.pem in /etc/ssl/certs
	I0717 19:59:02.415570 1103141 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:59:02.430800 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:59:02.465311 1103141 start.go:303] post-start completed in 162.851156ms
	I0717 19:59:02.465347 1103141 fix.go:56] fixHost completed within 24.594172049s
	I0717 19:59:02.465375 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 19:59:02.468945 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:02.469406 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:02.469443 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:02.469704 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 19:59:02.469972 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:02.470166 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:02.470301 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 19:59:02.470501 1103141 main.go:141] libmachine: Using SSH client type: native
	I0717 19:59:02.471120 1103141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I0717 19:59:02.471159 1103141 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:59:02.602921 1103141 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689623942.546317761
	
	I0717 19:59:02.602957 1103141 fix.go:206] guest clock: 1689623942.546317761
	I0717 19:59:02.602970 1103141 fix.go:219] Guest: 2023-07-17 19:59:02.546317761 +0000 UTC Remote: 2023-07-17 19:59:02.465351333 +0000 UTC m=+106.772168927 (delta=80.966428ms)
	I0717 19:59:02.603036 1103141 fix.go:190] guest clock delta is within tolerance: 80.966428ms
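The garbled "date +%!s(MISSING).%!N(MISSING)" entry above is the same logging artifact; the clock probe actually run on the guest is simply:

	date +%s.%N

which prints Unix seconds and nanoseconds, matching the 1689623942.546317761 value captured above.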
	I0717 19:59:02.603053 1103141 start.go:83] releasing machines lock for "embed-certs-114855", held for 24.731922082s
	I0717 19:59:02.604022 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 19:59:02.604447 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetIP
	I0717 19:59:02.608397 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:02.608991 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:02.609030 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:02.609308 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 19:59:02.610102 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 19:59:02.610386 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 19:59:02.610634 1103141 ssh_runner.go:195] Run: cat /version.json
	I0717 19:59:02.610677 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 19:59:02.611009 1103141 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:59:02.611106 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 19:59:02.614739 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:02.615121 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:02.615253 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:02.616278 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:02.616386 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 19:59:02.616802 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:02.616829 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:02.617030 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 19:59:02.617096 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:02.617395 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 19:59:02.617442 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 19:59:02.617597 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 19:59:02.617826 1103141 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/embed-certs-114855/id_rsa Username:docker}
	I0717 19:59:02.618522 1103141 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/embed-certs-114855/id_rsa Username:docker}
	W0717 19:59:02.745192 1103141 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 19:59:02.745275 1103141 ssh_runner.go:195] Run: systemctl --version
	I0717 19:59:02.752196 1103141 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:59:02.903288 1103141 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:59:02.911818 1103141 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:59:02.911917 1103141 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:59:02.933786 1103141 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:59:02.933883 1103141 start.go:469] detecting cgroup driver to use...
	I0717 19:59:02.934004 1103141 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:59:02.955263 1103141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:59:02.974997 1103141 docker.go:196] disabling cri-docker service (if available) ...
	I0717 19:59:02.975077 1103141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:59:02.994203 1103141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:59:03.014446 1103141 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:59:03.198307 1103141 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:59:03.397392 1103141 docker.go:212] disabling docker service ...
	I0717 19:59:03.397591 1103141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:59:03.418509 1103141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:59:03.437373 1103141 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:59:03.613508 1103141 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:59:03.739647 1103141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:59:03.754406 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:59:03.777929 1103141 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:59:03.778091 1103141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:59:03.790606 1103141 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:59:03.790721 1103141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:59:03.804187 1103141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:59:03.817347 1103141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
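After these sed edits, the drop-in at /etc/crio/crio.conf.d/02-crio.conf should contain entries like the following (a sketch of the expected result assuming the usual CRI-O TOML layout; the file itself is not captured in this log):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"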
	I0717 19:59:03.828813 1103141 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:59:03.840430 1103141 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:59:03.850240 1103141 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:59:03.850319 1103141 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:59:03.865894 1103141 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:59:03.882258 1103141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:59:04.017800 1103141 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:59:04.248761 1103141 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:59:04.248842 1103141 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:59:04.257893 1103141 start.go:537] Will wait 60s for crictl version
	I0717 19:59:04.257984 1103141 ssh_runner.go:195] Run: which crictl
	I0717 19:59:04.264221 1103141 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:59:04.305766 1103141 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 19:59:04.305851 1103141 ssh_runner.go:195] Run: crio --version
	I0717 19:59:04.375479 1103141 ssh_runner.go:195] Run: crio --version
	I0717 19:59:04.436461 1103141 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0717 19:59:04.438378 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetIP
	I0717 19:59:04.442194 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:04.442754 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 19:59:04.442792 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 19:59:04.443221 1103141 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 19:59:04.448534 1103141 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:59:04.465868 1103141 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 19:59:04.465946 1103141 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:59:04.502130 1103141 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0717 19:59:04.502219 1103141 ssh_runner.go:195] Run: which lz4
	I0717 19:59:04.507394 1103141 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 19:59:04.512404 1103141 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:59:04.512452 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0717 19:59:04.409929 1102415 api_server.go:253] Checking apiserver healthz at https://192.168.72.51:8444/healthz ...
	I0717 19:59:04.419102 1102415 api_server.go:279] https://192.168.72.51:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:59:04.419138 1102415 api_server.go:103] status: https://192.168.72.51:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:59:04.910761 1102415 api_server.go:253] Checking apiserver healthz at https://192.168.72.51:8444/healthz ...
	I0717 19:59:04.919844 1102415 api_server.go:279] https://192.168.72.51:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:59:04.919898 1102415 api_server.go:103] status: https://192.168.72.51:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:59:05.410298 1102415 api_server.go:253] Checking apiserver healthz at https://192.168.72.51:8444/healthz ...
	I0717 19:59:05.424961 1102415 api_server.go:279] https://192.168.72.51:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:59:05.425002 1102415 api_server.go:103] status: https://192.168.72.51:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:59:05.910377 1102415 api_server.go:253] Checking apiserver healthz at https://192.168.72.51:8444/healthz ...
	I0717 19:59:05.924698 1102415 api_server.go:279] https://192.168.72.51:8444/healthz returned 200:
	ok
	I0717 19:59:05.949272 1102415 api_server.go:141] control plane version: v1.27.3
	I0717 19:59:05.949308 1102415 api_server.go:131] duration metric: took 6.707896837s to wait for apiserver health ...
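The /healthz polling above can be reproduced by hand against the same endpoint; with the default anonymous access to health endpoints and the apiserver's self-signed certificate, something like the following (illustrative only, not part of the test run) returns either "ok" or the per-check breakdown shown earlier:

	curl -k https://192.168.72.51:8444/healthz
	curl -k "https://192.168.72.51:8444/healthz?verbose"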
	I0717 19:59:05.949321 1102415 cni.go:84] Creating CNI manager for ""
	I0717 19:59:05.949334 1102415 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:59:05.952250 1102415 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:59:02.634580 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .Start
	I0717 19:59:02.635005 1101908 main.go:141] libmachine: (old-k8s-version-149000) Ensuring networks are active...
	I0717 19:59:02.635919 1101908 main.go:141] libmachine: (old-k8s-version-149000) Ensuring network default is active
	I0717 19:59:02.636328 1101908 main.go:141] libmachine: (old-k8s-version-149000) Ensuring network mk-old-k8s-version-149000 is active
	I0717 19:59:02.637168 1101908 main.go:141] libmachine: (old-k8s-version-149000) Getting domain xml...
	I0717 19:59:02.638177 1101908 main.go:141] libmachine: (old-k8s-version-149000) Creating domain...
	I0717 19:59:04.249328 1101908 main.go:141] libmachine: (old-k8s-version-149000) Waiting to get IP...
	I0717 19:59:04.250286 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:04.250925 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:04.251047 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:04.250909 1103733 retry.go:31] will retry after 305.194032ms: waiting for machine to come up
	I0717 19:59:04.558456 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:04.559354 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:04.559387 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:04.559290 1103733 retry.go:31] will retry after 338.882261ms: waiting for machine to come up
	I0717 19:59:04.900152 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:04.900673 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:04.900700 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:04.900616 1103733 retry.go:31] will retry after 334.664525ms: waiting for machine to come up
	I0717 19:59:05.236557 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:05.237252 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:05.237280 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:05.237121 1103733 retry.go:31] will retry after 410.314805ms: waiting for machine to come up
	I0717 19:59:05.648936 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:05.649630 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:05.649665 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:05.649572 1103733 retry.go:31] will retry after 482.724985ms: waiting for machine to come up
	I0717 19:59:06.135159 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:06.135923 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:06.135961 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:06.135851 1103733 retry.go:31] will retry after 646.078047ms: waiting for machine to come up
	I0717 19:59:06.783788 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:06.784327 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:06.784352 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:06.784239 1103733 retry.go:31] will retry after 1.176519187s: waiting for machine to come up
	I0717 19:59:05.954319 1102415 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:59:06.005185 1102415 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
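The 457-byte conflist is generated in memory, so its exact contents are not in this log; a typical bridge-plus-portmap CNI configuration of roughly this shape (purely illustrative, including the example pod subnet) looks like:

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}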
	I0717 19:59:06.070856 1102415 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:59:06.086358 1102415 system_pods.go:59] 8 kube-system pods found
	I0717 19:59:06.086429 1102415 system_pods.go:61] "coredns-5d78c9869d-rjqsv" [f27e2de9-9849-40e2-b6dc-1ee27537b1e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 19:59:06.086448 1102415 system_pods.go:61] "etcd-default-k8s-diff-port-711413" [15f74e3f-61a1-4464-bbff-d336f6df4b6e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:59:06.086462 1102415 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-711413" [c164ab32-6a1d-4079-9b58-7da96eabc60e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:59:06.086481 1102415 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-711413" [7dc149c8-be03-4fd6-b945-c63e95a28470] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:59:06.086498 1102415 system_pods.go:61] "kube-proxy-9qfpg" [ecb84bb9-57a2-4a42-8104-b792d38479ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 19:59:06.086513 1102415 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-711413" [f2cb374f-674e-4a08-82e0-0932b732b485] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:59:06.086526 1102415 system_pods.go:61] "metrics-server-74d5c6b9c-hzcd7" [17e01399-9910-4f01-abe7-3eae271af1db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:59:06.086536 1102415 system_pods.go:61] "storage-provisioner" [43705714-97f1-4b06-8eeb-04d60c22112a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 19:59:06.086546 1102415 system_pods.go:74] duration metric: took 15.663084ms to wait for pod list to return data ...
	I0717 19:59:06.086556 1102415 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:59:06.113146 1102415 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 19:59:06.113186 1102415 node_conditions.go:123] node cpu capacity is 2
	I0717 19:59:06.113203 1102415 node_conditions.go:105] duration metric: took 26.64051ms to run NodePressure ...
	I0717 19:59:06.113228 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:06.757768 1102415 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 19:59:06.770030 1102415 kubeadm.go:787] kubelet initialised
	I0717 19:59:06.770064 1102415 kubeadm.go:788] duration metric: took 12.262867ms waiting for restarted kubelet to initialise ...
	I0717 19:59:06.770077 1102415 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:59:06.782569 1102415 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-rjqsv" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:06.794688 1102415 pod_ready.go:97] node "default-k8s-diff-port-711413" hosting pod "coredns-5d78c9869d-rjqsv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:06.794714 1102415 pod_ready.go:81] duration metric: took 12.110858ms waiting for pod "coredns-5d78c9869d-rjqsv" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:06.794723 1102415 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-711413" hosting pod "coredns-5d78c9869d-rjqsv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:06.794732 1102415 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:06.812213 1102415 pod_ready.go:97] node "default-k8s-diff-port-711413" hosting pod "etcd-default-k8s-diff-port-711413" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:06.812265 1102415 pod_ready.go:81] duration metric: took 17.522572ms waiting for pod "etcd-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:06.812281 1102415 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-711413" hosting pod "etcd-default-k8s-diff-port-711413" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:06.812291 1102415 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:06.838241 1102415 pod_ready.go:97] node "default-k8s-diff-port-711413" hosting pod "kube-apiserver-default-k8s-diff-port-711413" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:06.838291 1102415 pod_ready.go:81] duration metric: took 25.986333ms waiting for pod "kube-apiserver-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:06.838306 1102415 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-711413" hosting pod "kube-apiserver-default-k8s-diff-port-711413" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:06.838318 1102415 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:06.869011 1102415 pod_ready.go:97] node "default-k8s-diff-port-711413" hosting pod "kube-controller-manager-default-k8s-diff-port-711413" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:06.869127 1102415 pod_ready.go:81] duration metric: took 30.791681ms waiting for pod "kube-controller-manager-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:06.869155 1102415 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-711413" hosting pod "kube-controller-manager-default-k8s-diff-port-711413" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:06.869192 1102415 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9qfpg" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:07.164422 1102415 pod_ready.go:97] node "default-k8s-diff-port-711413" hosting pod "kube-proxy-9qfpg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:07.164521 1102415 pod_ready.go:81] duration metric: took 295.308967ms waiting for pod "kube-proxy-9qfpg" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:07.164549 1102415 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-711413" hosting pod "kube-proxy-9qfpg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:07.164570 1102415 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:07.571331 1102415 pod_ready.go:97] node "default-k8s-diff-port-711413" hosting pod "kube-scheduler-default-k8s-diff-port-711413" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:07.571370 1102415 pod_ready.go:81] duration metric: took 406.779012ms waiting for pod "kube-scheduler-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:07.571383 1102415 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-711413" hosting pod "kube-scheduler-default-k8s-diff-port-711413" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:07.571393 1102415 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:07.967699 1102415 pod_ready.go:97] node "default-k8s-diff-port-711413" hosting pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:07.967740 1102415 pod_ready.go:81] duration metric: took 396.334567ms waiting for pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace to be "Ready" ...
	E0717 19:59:07.967757 1102415 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-711413" hosting pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:07.967770 1102415 pod_ready.go:38] duration metric: took 1.197678353s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:59:07.967793 1102415 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 19:59:08.014470 1102415 ops.go:34] apiserver oom_adj: -16
	I0717 19:59:08.014500 1102415 kubeadm.go:640] restartCluster took 22.633851106s
	I0717 19:59:08.014510 1102415 kubeadm.go:406] StartCluster complete in 22.683627985s
	I0717 19:59:08.014534 1102415 settings.go:142] acquiring lock: {Name:mk3323296c186763db6ebb5a1c5ae94a6b1a6242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:59:08.014622 1102415 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 19:59:08.017393 1102415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/kubeconfig: {Name:mk7f4fe48169c87be980c7edd9dbe55d4ea8b9ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:59:08.018018 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 19:59:08.018126 1102415 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 19:59:08.018273 1102415 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-711413"
	I0717 19:59:08.018300 1102415 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-711413"
	W0717 19:59:08.018309 1102415 addons.go:240] addon storage-provisioner should already be in state true
	I0717 19:59:08.018404 1102415 host.go:66] Checking if "default-k8s-diff-port-711413" exists ...
	I0717 19:59:08.018400 1102415 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-711413"
	I0717 19:59:08.018457 1102415 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-711413"
	W0717 19:59:08.018471 1102415 addons.go:240] addon metrics-server should already be in state true
	I0717 19:59:08.018538 1102415 host.go:66] Checking if "default-k8s-diff-port-711413" exists ...
	I0717 19:59:08.018864 1102415 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:08.018916 1102415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:08.018950 1102415 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:08.018997 1102415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:08.019087 1102415 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-711413"
	I0717 19:59:08.019108 1102415 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-711413"
	I0717 19:59:08.019378 1102415 config.go:182] Loaded profile config "default-k8s-diff-port-711413": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:59:08.019724 1102415 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:08.019823 1102415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:08.028311 1102415 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-711413" context rescaled to 1 replicas
	I0717 19:59:08.028363 1102415 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.51 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:59:08.031275 1102415 out.go:177] * Verifying Kubernetes components...
	I0717 19:59:08.033186 1102415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:59:08.041793 1102415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43183
	I0717 19:59:08.041831 1102415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36281
	I0717 19:59:08.042056 1102415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43569
	I0717 19:59:08.042525 1102415 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:08.042709 1102415 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:08.043195 1102415 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:08.043373 1102415 main.go:141] libmachine: Using API Version  1
	I0717 19:59:08.043382 1102415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:08.043479 1102415 main.go:141] libmachine: Using API Version  1
	I0717 19:59:08.043486 1102415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:08.043911 1102415 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:08.044078 1102415 main.go:141] libmachine: Using API Version  1
	I0717 19:59:08.044095 1102415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:08.044514 1102415 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:08.044542 1102415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:08.044773 1102415 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:08.044878 1102415 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:08.045003 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetState
	I0717 19:59:08.045373 1102415 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:08.045401 1102415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:08.065715 1102415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45741
	I0717 19:59:08.066371 1102415 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:08.067102 1102415 main.go:141] libmachine: Using API Version  1
	I0717 19:59:08.067128 1102415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:08.067662 1102415 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:08.067824 1102415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37537
	I0717 19:59:08.068091 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetState
	I0717 19:59:08.069488 1102415 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:08.070144 1102415 main.go:141] libmachine: Using API Version  1
	I0717 19:59:08.070163 1102415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:08.070232 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	I0717 19:59:08.070672 1102415 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:08.070852 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetState
	I0717 19:59:08.072648 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	I0717 19:59:08.075752 1102415 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 19:59:08.077844 1102415 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:59:04.355036 1102136 addons.go:502] enable addons completed in 3.125961318s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0717 19:59:06.268158 1102136 node_ready.go:58] node "no-preload-408472" has status "Ready":"False"
	I0717 19:59:08.079803 1102415 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:59:08.079826 1102415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 19:59:08.079857 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:59:08.077802 1102415 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 19:59:08.079941 1102415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 19:59:08.079958 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:59:08.078604 1102415 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-711413"
	W0717 19:59:08.080010 1102415 addons.go:240] addon default-storageclass should already be in state true
	I0717 19:59:08.080048 1102415 host.go:66] Checking if "default-k8s-diff-port-711413" exists ...
	I0717 19:59:08.080446 1102415 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:08.080498 1102415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:08.084746 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:59:08.084836 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:59:08.085468 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:59:08.085502 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:59:08.085512 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:59:08.085534 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:59:08.085599 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:59:08.085738 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:59:08.085851 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:59:08.085998 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:59:08.086028 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:59:08.086182 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:59:08.086221 1102415 sshutil.go:53] new ssh client: &{IP:192.168.72.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/default-k8s-diff-port-711413/id_rsa Username:docker}
	I0717 19:59:08.086298 1102415 sshutil.go:53] new ssh client: &{IP:192.168.72.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/default-k8s-diff-port-711413/id_rsa Username:docker}
	I0717 19:59:08.103113 1102415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41455
	I0717 19:59:08.103751 1102415 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:08.104389 1102415 main.go:141] libmachine: Using API Version  1
	I0717 19:59:08.104412 1102415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:08.104985 1102415 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:08.105805 1102415 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:59:08.105846 1102415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:59:08.127906 1102415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34411
	I0717 19:59:08.129757 1102415 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:59:08.130713 1102415 main.go:141] libmachine: Using API Version  1
	I0717 19:59:08.130734 1102415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:59:08.131175 1102415 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:59:08.133060 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetState
	I0717 19:59:08.135496 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .DriverName
	I0717 19:59:08.135824 1102415 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 19:59:08.135840 1102415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 19:59:08.135860 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHHostname
	I0717 19:59:08.139031 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:59:08.139443 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:d7:a9", ip: ""} in network mk-default-k8s-diff-port-711413: {Iface:virbr2 ExpiryTime:2023-07-17 20:58:31 +0000 UTC Type:0 Mac:52:54:00:7d:d7:a9 Iaid: IPaddr:192.168.72.51 Prefix:24 Hostname:default-k8s-diff-port-711413 Clientid:01:52:54:00:7d:d7:a9}
	I0717 19:59:08.139480 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | domain default-k8s-diff-port-711413 has defined IP address 192.168.72.51 and MAC address 52:54:00:7d:d7:a9 in network mk-default-k8s-diff-port-711413
	I0717 19:59:08.139855 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHPort
	I0717 19:59:08.140455 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHKeyPath
	I0717 19:59:08.140850 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .GetSSHUsername
	I0717 19:59:08.141145 1102415 sshutil.go:53] new ssh client: &{IP:192.168.72.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/default-k8s-diff-port-711413/id_rsa Username:docker}
	I0717 19:59:08.260742 1102415 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 19:59:08.260779 1102415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 19:59:08.310084 1102415 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 19:59:08.310123 1102415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 19:59:08.315228 1102415 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:59:08.333112 1102415 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 19:59:08.347265 1102415 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:59:08.347297 1102415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 19:59:08.446018 1102415 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:59:08.602418 1102415 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0717 19:59:08.602481 1102415 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-711413" to be "Ready" ...
	I0717 19:59:06.789410 1103141 crio.go:444] Took 2.282067 seconds to copy over tarball
	I0717 19:59:06.789500 1103141 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 19:59:10.614919 1103141 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.825382729s)
	I0717 19:59:10.614956 1103141 crio.go:451] Took 3.825512 seconds to extract the tarball
	I0717 19:59:10.614970 1103141 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 19:59:10.668773 1103141 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:59:10.721815 1103141 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 19:59:10.721849 1103141 cache_images.go:84] Images are preloaded, skipping loading
	I0717 19:59:10.721928 1103141 ssh_runner.go:195] Run: crio config
	I0717 19:59:10.626470 1102415 node_ready.go:58] node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:11.522603 1102415 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.189445026s)
	I0717 19:59:11.522668 1102415 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:11.522681 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .Close
	I0717 19:59:11.522703 1102415 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.207433491s)
	I0717 19:59:11.522747 1102415 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:11.522762 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .Close
	I0717 19:59:11.523183 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | Closing plugin on server side
	I0717 19:59:11.523208 1102415 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:11.523223 1102415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:11.523234 1102415 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:11.523247 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .Close
	I0717 19:59:11.523700 1102415 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:11.523717 1102415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:11.523768 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | Closing plugin on server side
	I0717 19:59:11.525232 1102415 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:11.525259 1102415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:11.525269 1102415 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:11.525278 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .Close
	I0717 19:59:11.526823 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | Closing plugin on server side
	I0717 19:59:11.526841 1102415 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:11.526864 1102415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:11.526878 1102415 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:11.526889 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .Close
	I0717 19:59:11.527158 1102415 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:11.527174 1102415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:11.527190 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | Closing plugin on server side
	I0717 19:59:11.546758 1102415 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.100689574s)
	I0717 19:59:11.546840 1102415 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:11.546856 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .Close
	I0717 19:59:11.548817 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | Closing plugin on server side
	I0717 19:59:11.548900 1102415 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:11.548920 1102415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:11.548946 1102415 main.go:141] libmachine: Making call to close driver server
	I0717 19:59:11.548966 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) Calling .Close
	I0717 19:59:11.549341 1102415 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:59:11.549360 1102415 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:59:11.549374 1102415 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-711413"
	I0717 19:59:11.549385 1102415 main.go:141] libmachine: (default-k8s-diff-port-711413) DBG | Closing plugin on server side
	I0717 19:59:11.629748 1102415 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 19:59:07.962879 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:07.963461 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:07.963494 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:07.963408 1103733 retry.go:31] will retry after 1.458776494s: waiting for machine to come up
	I0717 19:59:09.423815 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:09.424545 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:09.424578 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:09.424434 1103733 retry.go:31] will retry after 1.505416741s: waiting for machine to come up
	I0717 19:59:10.932450 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:10.932970 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:10.932999 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:10.932907 1103733 retry.go:31] will retry after 2.119238731s: waiting for machine to come up
	I0717 19:59:08.762965 1102136 node_ready.go:49] node "no-preload-408472" has status "Ready":"True"
	I0717 19:59:08.762999 1102136 node_ready.go:38] duration metric: took 7.016711148s waiting for node "no-preload-408472" to be "Ready" ...
	I0717 19:59:08.763010 1102136 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:59:08.770929 1102136 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-9mxdj" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:08.781876 1102136 pod_ready.go:92] pod "coredns-5d78c9869d-9mxdj" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:08.781916 1102136 pod_ready.go:81] duration metric: took 10.948677ms waiting for pod "coredns-5d78c9869d-9mxdj" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:08.781931 1102136 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:08.790806 1102136 pod_ready.go:92] pod "etcd-no-preload-408472" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:08.790842 1102136 pod_ready.go:81] duration metric: took 8.902354ms waiting for pod "etcd-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:08.790858 1102136 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:11.107348 1102136 pod_ready.go:102] pod "kube-apiserver-no-preload-408472" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:12.306923 1102136 pod_ready.go:92] pod "kube-apiserver-no-preload-408472" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:12.306956 1102136 pod_ready.go:81] duration metric: took 3.516087536s waiting for pod "kube-apiserver-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:12.306971 1102136 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:12.314504 1102136 pod_ready.go:92] pod "kube-controller-manager-no-preload-408472" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:12.314541 1102136 pod_ready.go:81] duration metric: took 7.560269ms waiting for pod "kube-controller-manager-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:12.314557 1102136 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cntdn" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:12.323200 1102136 pod_ready.go:92] pod "kube-proxy-cntdn" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:12.323232 1102136 pod_ready.go:81] duration metric: took 8.667115ms waiting for pod "kube-proxy-cntdn" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:12.323246 1102136 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:12.367453 1102136 pod_ready.go:92] pod "kube-scheduler-no-preload-408472" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:12.367483 1102136 pod_ready.go:81] duration metric: took 44.229894ms waiting for pod "kube-scheduler-no-preload-408472" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:12.367494 1102136 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:11.776332 1102415 addons.go:502] enable addons completed in 3.758222459s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 19:59:13.118285 1102415 node_ready.go:58] node "default-k8s-diff-port-711413" has status "Ready":"False"
	I0717 19:59:10.806964 1103141 cni.go:84] Creating CNI manager for ""
	I0717 19:59:10.907820 1103141 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:59:10.908604 1103141 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 19:59:10.908671 1103141 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.213 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-114855 NodeName:embed-certs-114855 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:59:10.909456 1103141 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.213
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-114855"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:59:10.909661 1103141 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-114855 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:embed-certs-114855 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 19:59:10.909757 1103141 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 19:59:10.933995 1103141 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:59:10.934116 1103141 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:59:10.949424 1103141 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0717 19:59:10.971981 1103141 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:59:10.995942 1103141 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0717 19:59:11.021147 1103141 ssh_runner.go:195] Run: grep 192.168.39.213	control-plane.minikube.internal$ /etc/hosts
	I0717 19:59:11.027824 1103141 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.213	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:59:11.046452 1103141 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855 for IP: 192.168.39.213
	I0717 19:59:11.046507 1103141 certs.go:190] acquiring lock for shared ca certs: {Name:mkdfbc203d7bd874aff2f1c4e5c3352251804e32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:59:11.046722 1103141 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key
	I0717 19:59:11.046792 1103141 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key
	I0717 19:59:11.046890 1103141 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855/client.key
	I0717 19:59:11.046974 1103141 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855/apiserver.key.af9d86f2
	I0717 19:59:11.047032 1103141 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855/proxy-client.key
	I0717 19:59:11.047198 1103141 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem (1338 bytes)
	W0717 19:59:11.047246 1103141 certs.go:433] ignoring /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954_empty.pem, impossibly tiny 0 bytes
	I0717 19:59:11.047262 1103141 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:59:11.047297 1103141 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem (1082 bytes)
	I0717 19:59:11.047330 1103141 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:59:11.047360 1103141 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem (1675 bytes)
	I0717 19:59:11.047422 1103141 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:59:11.048308 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 19:59:11.076826 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 19:59:11.116981 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:59:11.152433 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/embed-certs-114855/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 19:59:11.186124 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:59:11.219052 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 19:59:11.251034 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:59:11.281026 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 19:59:11.314219 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:59:11.341636 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem --> /usr/share/ca-certificates/1068954.pem (1338 bytes)
	I0717 19:59:11.372920 1103141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /usr/share/ca-certificates/10689542.pem (1708 bytes)
	I0717 19:59:11.403343 1103141 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:59:11.428094 1103141 ssh_runner.go:195] Run: openssl version
	I0717 19:59:11.435909 1103141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:59:11.455770 1103141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:59:11.463749 1103141 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:59:11.463851 1103141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:59:11.473784 1103141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:59:11.490867 1103141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1068954.pem && ln -fs /usr/share/ca-certificates/1068954.pem /etc/ssl/certs/1068954.pem"
	I0717 19:59:11.507494 1103141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1068954.pem
	I0717 19:59:11.514644 1103141 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:52 /usr/share/ca-certificates/1068954.pem
	I0717 19:59:11.514746 1103141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1068954.pem
	I0717 19:59:11.523975 1103141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1068954.pem /etc/ssl/certs/51391683.0"
	I0717 19:59:11.539528 1103141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10689542.pem && ln -fs /usr/share/ca-certificates/10689542.pem /etc/ssl/certs/10689542.pem"
	I0717 19:59:11.552649 1103141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10689542.pem
	I0717 19:59:11.559671 1103141 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:52 /usr/share/ca-certificates/10689542.pem
	I0717 19:59:11.559757 1103141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10689542.pem
	I0717 19:59:11.569190 1103141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10689542.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:59:11.584473 1103141 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 19:59:11.590453 1103141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:59:11.599427 1103141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:59:11.607503 1103141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:59:11.619641 1103141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:59:11.627914 1103141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:59:11.636600 1103141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 19:59:11.645829 1103141 kubeadm.go:404] StartCluster: {Name:embed-certs-114855 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-114855 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:59:11.645960 1103141 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:59:11.646049 1103141 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:59:11.704959 1103141 cri.go:89] found id: ""
	I0717 19:59:11.705078 1103141 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:59:11.720588 1103141 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 19:59:11.720621 1103141 kubeadm.go:636] restartCluster start
	I0717 19:59:11.720697 1103141 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:59:11.734693 1103141 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:11.736236 1103141 kubeconfig.go:92] found "embed-certs-114855" server: "https://192.168.39.213:8443"
	I0717 19:59:11.739060 1103141 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:59:11.752975 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:11.753096 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:11.766287 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:12.266751 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:12.266867 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:12.281077 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:12.766565 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:12.766669 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:12.780460 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:13.267185 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:13.267305 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:13.286250 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:13.766474 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:13.766582 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:13.780973 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:14.266474 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:14.266565 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:14.283412 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:14.766783 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:14.766885 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:14.782291 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:15.266607 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:15.266721 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:15.279993 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:13.054320 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:13.054787 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:13.054821 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:13.054724 1103733 retry.go:31] will retry after 2.539531721s: waiting for machine to come up
	I0717 19:59:15.597641 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:15.598199 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:15.598235 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:15.598132 1103733 retry.go:31] will retry after 3.376944775s: waiting for machine to come up
	I0717 19:59:14.773506 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:16.778529 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:14.611538 1102415 node_ready.go:49] node "default-k8s-diff-port-711413" has status "Ready":"True"
	I0717 19:59:14.611573 1102415 node_ready.go:38] duration metric: took 6.009046151s waiting for node "default-k8s-diff-port-711413" to be "Ready" ...
	I0717 19:59:14.611583 1102415 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:59:14.620522 1102415 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-rjqsv" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:14.629345 1102415 pod_ready.go:92] pod "coredns-5d78c9869d-rjqsv" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:14.629380 1102415 pod_ready.go:81] duration metric: took 8.831579ms waiting for pod "coredns-5d78c9869d-rjqsv" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:14.629394 1102415 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:14.636756 1102415 pod_ready.go:92] pod "etcd-default-k8s-diff-port-711413" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:14.636781 1102415 pod_ready.go:81] duration metric: took 7.379506ms waiting for pod "etcd-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:14.636791 1102415 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:16.658668 1102415 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-711413" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:16.658699 1102415 pod_ready.go:81] duration metric: took 2.021899463s waiting for pod "kube-apiserver-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:16.658715 1102415 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:16.667666 1102415 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-711413" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:16.667695 1102415 pod_ready.go:81] duration metric: took 8.971091ms waiting for pod "kube-controller-manager-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:16.667709 1102415 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9qfpg" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:16.677402 1102415 pod_ready.go:92] pod "kube-proxy-9qfpg" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:16.677433 1102415 pod_ready.go:81] duration metric: took 9.71529ms waiting for pod "kube-proxy-9qfpg" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:16.677448 1102415 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:17.011304 1102415 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-711413" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:17.011332 1102415 pod_ready.go:81] duration metric: took 333.876392ms waiting for pod "kube-scheduler-default-k8s-diff-port-711413" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:17.011344 1102415 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:15.766793 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:15.766913 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:15.780587 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:16.266363 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:16.266491 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:16.281228 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:16.766575 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:16.766690 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:16.782127 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:17.266511 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:17.266610 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:17.282119 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:17.766652 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:17.766758 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:17.783972 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:18.266759 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:18.266855 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:18.284378 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:18.766574 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:18.766675 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:18.782934 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:19.266475 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:19.266577 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:19.280895 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:19.767307 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:19.767411 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:19.781007 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:20.266522 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:20.266646 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:20.280722 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:18.976814 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:18.977300 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:18.977326 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:18.977254 1103733 retry.go:31] will retry after 2.728703676s: waiting for machine to come up
	I0717 19:59:21.709422 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:21.709889 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | unable to find current IP address of domain old-k8s-version-149000 in network mk-old-k8s-version-149000
	I0717 19:59:21.709916 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | I0717 19:59:21.709841 1103733 retry.go:31] will retry after 5.373130791s: waiting for machine to come up
	I0717 19:59:19.273610 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:21.274431 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:19.419889 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:21.422395 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:23.423974 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:20.767398 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:20.767505 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:20.780641 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:21.266963 1103141 api_server.go:166] Checking apiserver status ...
	I0717 19:59:21.267053 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:21.280185 1103141 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:21.753855 1103141 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0717 19:59:21.753890 1103141 kubeadm.go:1128] stopping kube-system containers ...
	I0717 19:59:21.753905 1103141 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:59:21.753969 1103141 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:59:21.792189 1103141 cri.go:89] found id: ""
	I0717 19:59:21.792276 1103141 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:59:21.809670 1103141 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:59:21.820341 1103141 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:59:21.820408 1103141 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:59:21.830164 1103141 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 19:59:21.830194 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:21.961988 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:22.788248 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:23.013910 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:23.110334 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:23.204343 1103141 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:59:23.204448 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:59:23.721708 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:59:24.222046 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:59:24.721482 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:59:25.221523 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:59:25.721720 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:59:23.773347 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:26.275805 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:25.424115 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:27.920288 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:27.084831 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.085274 1101908 main.go:141] libmachine: (old-k8s-version-149000) Found IP for machine: 192.168.50.177
	I0717 19:59:27.085299 1101908 main.go:141] libmachine: (old-k8s-version-149000) Reserving static IP address...
	I0717 19:59:27.085332 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has current primary IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.085757 1101908 main.go:141] libmachine: (old-k8s-version-149000) Reserved static IP address: 192.168.50.177
	I0717 19:59:27.085799 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "old-k8s-version-149000", mac: "52:54:00:88:d8:03", ip: "192.168.50.177"} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:27.085821 1101908 main.go:141] libmachine: (old-k8s-version-149000) Waiting for SSH to be available...
	I0717 19:59:27.085855 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | skip adding static IP to network mk-old-k8s-version-149000 - found existing host DHCP lease matching {name: "old-k8s-version-149000", mac: "52:54:00:88:d8:03", ip: "192.168.50.177"}
	I0717 19:59:27.085880 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | Getting to WaitForSSH function...
	I0717 19:59:27.088245 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.088569 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:27.088605 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.088777 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | Using SSH client type: external
	I0717 19:59:27.088809 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | Using SSH private key: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/old-k8s-version-149000/id_rsa (-rw-------)
	I0717 19:59:27.088850 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.177 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/old-k8s-version-149000/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:59:27.088866 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | About to run SSH command:
	I0717 19:59:27.088877 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | exit 0
	I0717 19:59:27.186039 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | SSH cmd err, output: <nil>: 
	I0717 19:59:27.186549 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetConfigRaw
	I0717 19:59:27.187427 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetIP
	I0717 19:59:27.190317 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.190738 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:27.190781 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.191089 1101908 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/config.json ...
	I0717 19:59:27.191343 1101908 machine.go:88] provisioning docker machine ...
	I0717 19:59:27.191369 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	I0717 19:59:27.191637 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetMachineName
	I0717 19:59:27.191875 1101908 buildroot.go:166] provisioning hostname "old-k8s-version-149000"
	I0717 19:59:27.191902 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetMachineName
	I0717 19:59:27.192058 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 19:59:27.194710 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.195141 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:27.195190 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.195472 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 19:59:27.195752 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:27.195938 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:27.196104 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 19:59:27.196308 1101908 main.go:141] libmachine: Using SSH client type: native
	I0717 19:59:27.196731 1101908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.177 22 <nil> <nil>}
	I0717 19:59:27.196746 1101908 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-149000 && echo "old-k8s-version-149000" | sudo tee /etc/hostname
	I0717 19:59:27.338648 1101908 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-149000
	
	I0717 19:59:27.338712 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 19:59:27.341719 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.342138 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:27.342176 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.342392 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 19:59:27.342666 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:27.342879 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:27.343036 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 19:59:27.343216 1101908 main.go:141] libmachine: Using SSH client type: native
	I0717 19:59:27.343733 1101908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.177 22 <nil> <nil>}
	I0717 19:59:27.343763 1101908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-149000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-149000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-149000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:59:27.478006 1101908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
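The SSH snippet above makes the /etc/hosts edit idempotent: if the hostname is already mapped it does nothing, otherwise it rewrites the 127.0.1.1 line or appends one. A minimal Go sketch of the same logic, operating on the local file purely for illustration (the helper name `ensureHostsEntry` is an assumption, not minikube code):

```go
// Hedged sketch of the idempotent /etc/hosts edit run over SSH above.
package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

func ensureHostsEntry(contents, hostname string) string {
	// Already mapped somewhere? Leave the file untouched (grep -xq '.*\s<host>').
	if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(hostname)+`$`).MatchString(contents) {
		return contents
	}
	// Replace an existing 127.0.1.1 line if there is one...
	loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loop.MatchString(contents) {
		return loop.ReplaceAllString(contents, "127.0.1.1 "+hostname)
	}
	// ...otherwise append a fresh entry.
	return strings.TrimRight(contents, "\n") + "\n127.0.1.1 " + hostname + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	fmt.Print(ensureHostsEntry(string(data), "old-k8s-version-149000"))
}
```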
	I0717 19:59:27.478054 1101908 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16890-1061725/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-1061725/.minikube}
	I0717 19:59:27.478109 1101908 buildroot.go:174] setting up certificates
	I0717 19:59:27.478130 1101908 provision.go:83] configureAuth start
	I0717 19:59:27.478150 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetMachineName
	I0717 19:59:27.478485 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetIP
	I0717 19:59:27.481425 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.481865 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:27.481900 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.482029 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 19:59:27.484825 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.485290 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:27.485326 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.485505 1101908 provision.go:138] copyHostCerts
	I0717 19:59:27.485604 1101908 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem, removing ...
	I0717 19:59:27.485633 1101908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem
	I0717 19:59:27.485709 1101908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem (1082 bytes)
	I0717 19:59:27.485837 1101908 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem, removing ...
	I0717 19:59:27.485849 1101908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem
	I0717 19:59:27.485879 1101908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem (1123 bytes)
	I0717 19:59:27.485957 1101908 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem, removing ...
	I0717 19:59:27.485970 1101908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem
	I0717 19:59:27.485997 1101908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem (1675 bytes)
	I0717 19:59:27.486131 1101908 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-149000 san=[192.168.50.177 192.168.50.177 localhost 127.0.0.1 minikube old-k8s-version-149000]
	I0717 19:59:27.667436 1101908 provision.go:172] copyRemoteCerts
	I0717 19:59:27.667514 1101908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:59:27.667551 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 19:59:27.670875 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.671304 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:27.671340 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.671600 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 19:59:27.671851 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:27.672053 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 19:59:27.672222 1101908 sshutil.go:53] new ssh client: &{IP:192.168.50.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/old-k8s-version-149000/id_rsa Username:docker}
	I0717 19:59:27.764116 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 19:59:27.795726 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 19:59:27.827532 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:59:27.859734 1101908 provision.go:86] duration metric: configureAuth took 381.584228ms
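The configureAuth step generates a server certificate signed by the minikube CA, with SANs covering the VM IP, localhost, and the machine name (the san=[...] list logged above). A self-contained sketch of that kind of SAN-bearing issuance with Go's crypto/x509; unlike minikube, which signs with the ca.pem/ca-key.pem already on disk, this sketch creates a throwaway CA in-process, and error handling is elided for brevity:

```go
// Hedged sketch: issue a server cert carrying the SANs from the log line.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA (assumption: minikube would load its existing CA instead).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs listed in the provision log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-149000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-149000"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.50.177"), net.ParseIP("127.0.0.1")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	pemBytes := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	fmt.Printf("server.pem (%d bytes):\n%s", len(pemBytes), pemBytes)
}
```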
	I0717 19:59:27.859769 1101908 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:59:27.860014 1101908 config.go:182] Loaded profile config "old-k8s-version-149000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0717 19:59:27.860125 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 19:59:27.863330 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.863915 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:27.863969 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:27.864318 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 19:59:27.864559 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:27.864735 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:27.864931 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 19:59:27.865114 1101908 main.go:141] libmachine: Using SSH client type: native
	I0717 19:59:27.865768 1101908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.177 22 <nil> <nil>}
	I0717 19:59:27.865791 1101908 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:59:28.221755 1101908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:59:28.221788 1101908 machine.go:91] provisioned docker machine in 1.030429206s
	I0717 19:59:28.221802 1101908 start.go:300] post-start starting for "old-k8s-version-149000" (driver="kvm2")
	I0717 19:59:28.221817 1101908 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:59:28.221868 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	I0717 19:59:28.222236 1101908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:59:28.222265 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 19:59:28.225578 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:28.226092 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:28.226130 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:28.226268 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 19:59:28.226511 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:28.226695 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 19:59:28.226875 1101908 sshutil.go:53] new ssh client: &{IP:192.168.50.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/old-k8s-version-149000/id_rsa Username:docker}
	I0717 19:59:28.321338 1101908 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:59:28.326703 1101908 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 19:59:28.326747 1101908 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/addons for local assets ...
	I0717 19:59:28.326843 1101908 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/files for local assets ...
	I0717 19:59:28.326969 1101908 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> 10689542.pem in /etc/ssl/certs
	I0717 19:59:28.327239 1101908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:59:28.337536 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:59:28.366439 1101908 start.go:303] post-start completed in 144.619105ms
	I0717 19:59:28.366476 1101908 fix.go:56] fixHost completed within 25.763256574s
	I0717 19:59:28.366510 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 19:59:28.369661 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:28.370194 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:28.370249 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:28.370470 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 19:59:28.370758 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:28.370956 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:28.371192 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 19:59:28.371476 1101908 main.go:141] libmachine: Using SSH client type: native
	I0717 19:59:28.371943 1101908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.177 22 <nil> <nil>}
	I0717 19:59:28.371970 1101908 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 19:59:28.498983 1101908 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689623968.431200547
	
	I0717 19:59:28.499015 1101908 fix.go:206] guest clock: 1689623968.431200547
	I0717 19:59:28.499025 1101908 fix.go:219] Guest: 2023-07-17 19:59:28.431200547 +0000 UTC Remote: 2023-07-17 19:59:28.366482535 +0000 UTC m=+386.593094928 (delta=64.718012ms)
	I0717 19:59:28.499083 1101908 fix.go:190] guest clock delta is within tolerance: 64.718012ms
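The clock check above parses the guest's `date +%s.%N` output, subtracts the host time, and only forces a re-sync when the delta exceeds a tolerance. A minimal sketch of that comparison, using the timestamps from the log; the 2-second tolerance and the helper name are assumptions for illustration:

```go
// Hedged sketch of the guest-vs-host clock delta check ("delta=64.718012ms ... within tolerance").
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "1689623968.431200547" (seconds.nanoseconds) into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Right-pad to 9 digits so "4312" means 431200000ns, not 4312ns.
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	const tolerance = 2 * time.Second // assumed threshold, not minikube's actual value

	guest, err := parseGuestClock("1689623968.431200547")
	if err != nil {
		panic(err)
	}
	host := time.Unix(1689623968, 366482535) // stand-in for the local clock

	delta := guest.Sub(host)
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock skew %v exceeds %v; a re-sync would be needed\n", delta, tolerance)
	}
}
```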
	I0717 19:59:28.499090 1101908 start.go:83] releasing machines lock for "old-k8s-version-149000", held for 25.895913429s
	I0717 19:59:28.499122 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	I0717 19:59:28.499449 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetIP
	I0717 19:59:28.502760 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:28.503338 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:28.503395 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:28.503746 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	I0717 19:59:28.504549 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	I0717 19:59:28.504804 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	I0717 19:59:28.504907 1101908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:59:28.504995 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 19:59:28.505142 1101908 ssh_runner.go:195] Run: cat /version.json
	I0717 19:59:28.505175 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 19:59:28.508832 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:28.508868 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:28.509347 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:28.509384 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:28.509412 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:28.509431 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:28.509539 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 19:59:28.509827 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 19:59:28.509888 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:28.510074 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 19:59:28.510126 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 19:59:28.510292 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 19:59:28.510284 1101908 sshutil.go:53] new ssh client: &{IP:192.168.50.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/old-k8s-version-149000/id_rsa Username:docker}
	I0717 19:59:28.510442 1101908 sshutil.go:53] new ssh client: &{IP:192.168.50.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/old-k8s-version-149000/id_rsa Username:docker}
	W0717 19:59:28.604171 1101908 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 19:59:28.604283 1101908 ssh_runner.go:195] Run: systemctl --version
	I0717 19:59:28.637495 1101908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:59:28.790306 1101908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:59:28.797261 1101908 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:59:28.797343 1101908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:59:28.822016 1101908 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:59:28.822056 1101908 start.go:469] detecting cgroup driver to use...
	I0717 19:59:28.822144 1101908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:59:28.843785 1101908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:59:28.863178 1101908 docker.go:196] disabling cri-docker service (if available) ...
	I0717 19:59:28.863248 1101908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:59:28.880265 1101908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:59:28.897122 1101908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:59:29.019759 1101908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:59:29.166490 1101908 docker.go:212] disabling docker service ...
	I0717 19:59:29.166561 1101908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:59:29.188125 1101908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:59:29.205693 1101908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:59:29.336805 1101908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:59:29.478585 1101908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:59:29.494755 1101908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:59:29.516478 1101908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0717 19:59:29.516633 1101908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:59:29.527902 1101908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:59:29.528000 1101908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:59:29.539443 1101908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:59:29.551490 1101908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:59:29.563407 1101908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:59:29.577575 1101908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:59:29.587749 1101908 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:59:29.587839 1101908 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:59:29.602120 1101908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
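The three commands above form a fallback chain: probe the net.bridge.bridge-nf-call-iptables sysctl, load br_netfilter if the key is missing, then enable IPv4 forwarding. A hedged sketch of the same sequence, with plain exec calls standing in for the ssh_runner; it needs root on the target host:

```go
// Hedged sketch of the netfilter fallback: sysctl probe -> modprobe -> ip_forward.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %w\n%s", name, args, err, out)
	}
	return nil
}

func main() {
	// 1. Probe the sysctl key; failure usually means br_netfilter is not loaded.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		fmt.Println("couldn't verify netfilter, loading br_netfilter:", err)
		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			panic(err)
		}
	}
	// 2. Make sure the kernel forwards IPv4 traffic between interfaces.
	if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
		panic(err)
	}
}
```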
	I0717 19:59:29.613647 1101908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:59:29.730721 1101908 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:59:29.907780 1101908 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:59:29.907916 1101908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:59:29.913777 1101908 start.go:537] Will wait 60s for crictl version
	I0717 19:59:29.913855 1101908 ssh_runner.go:195] Run: which crictl
	I0717 19:59:29.921083 1101908 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:59:29.955985 1101908 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 19:59:29.956099 1101908 ssh_runner.go:195] Run: crio --version
	I0717 19:59:30.011733 1101908 ssh_runner.go:195] Run: crio --version
	I0717 19:59:30.068591 1101908 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0717 19:59:25.744228 1103141 api_server.go:72] duration metric: took 2.539876638s to wait for apiserver process to appear ...
	I0717 19:59:25.744263 1103141 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:59:25.744295 1103141 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0717 19:59:25.744850 1103141 api_server.go:269] stopped: https://192.168.39.213:8443/healthz: Get "https://192.168.39.213:8443/healthz": dial tcp 192.168.39.213:8443: connect: connection refused
	I0717 19:59:26.245930 1103141 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0717 19:59:29.163298 1103141 api_server.go:279] https://192.168.39.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:59:29.163345 1103141 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:59:29.163362 1103141 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0717 19:59:29.197738 1103141 api_server.go:279] https://192.168.39.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:59:29.197812 1103141 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:59:29.245946 1103141 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0717 19:59:29.261723 1103141 api_server.go:279] https://192.168.39.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:59:29.261777 1103141 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:59:29.745343 1103141 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0717 19:59:29.753999 1103141 api_server.go:279] https://192.168.39.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:59:29.754040 1103141 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:59:30.245170 1103141 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0717 19:59:30.253748 1103141 api_server.go:279] https://192.168.39.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:59:30.253809 1103141 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:59:30.745290 1103141 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0717 19:59:30.760666 1103141 api_server.go:279] https://192.168.39.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:59:30.760706 1103141 api_server.go:103] status: https://192.168.39.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:59:31.244952 1103141 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0717 19:59:31.262412 1103141 api_server.go:279] https://192.168.39.213:8443/healthz returned 200:
	ok
	I0717 19:59:31.284253 1103141 api_server.go:141] control plane version: v1.27.3
	I0717 19:59:31.284290 1103141 api_server.go:131] duration metric: took 5.540019245s to wait for apiserver health ...
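The preceding block is a retry loop: minikube polls the apiserver's /healthz endpoint, logging each non-200 body, until it answers 200 ok or a deadline expires. A minimal sketch of such a poll; the real check authenticates with the cluster's client certificates, whereas an anonymous probe like this one is exactly what yields 403 responses such as those above:

```go
// Hedged sketch of the apiserver /healthz wait loop.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a self-signed CA; a real client would trust it explicitly.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("stopped:", err) // e.g. connection refused while the apiserver restarts
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200:\n%s\n", url, body)
				return nil
			}
			fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.213:8443/healthz", 4*time.Minute); err != nil {
		panic(err)
	}
}
```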
	I0717 19:59:31.284303 1103141 cni.go:84] Creating CNI manager for ""
	I0717 19:59:31.284316 1103141 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:59:31.286828 1103141 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:59:30.070665 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetIP
	I0717 19:59:30.074049 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:30.074479 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 19:59:30.074503 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 19:59:30.074871 1101908 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0717 19:59:30.080177 1101908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:59:30.094479 1101908 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0717 19:59:30.094543 1101908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:59:30.130526 1101908 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0717 19:59:30.130599 1101908 ssh_runner.go:195] Run: which lz4
	I0717 19:59:30.135920 1101908 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 19:59:30.140678 1101908 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:59:30.140723 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0717 19:59:28.772996 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:30.785175 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:33.273857 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:30.427017 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:32.920586 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:31.288869 1103141 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:59:31.323116 1103141 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 19:59:31.368054 1103141 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:59:31.392061 1103141 system_pods.go:59] 8 kube-system pods found
	I0717 19:59:31.392110 1103141 system_pods.go:61] "coredns-5d78c9869d-rgdz8" [d1cc8cd3-70eb-4315-89d9-40d4ef97a5c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 19:59:31.392122 1103141 system_pods.go:61] "etcd-embed-certs-114855" [4c8e5fe0-e26e-4244-b284-5a42b4247614] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:59:31.392136 1103141 system_pods.go:61] "kube-apiserver-embed-certs-114855" [3cc43f5e-6c56-4587-a69a-ce58c12f500d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:59:31.392146 1103141 system_pods.go:61] "kube-controller-manager-embed-certs-114855" [cadca801-1feb-45f9-ac3c-eca697f1919f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:59:31.392157 1103141 system_pods.go:61] "kube-proxy-lkncr" [9ec4e4ac-81a5-4547-ab3e-6a3db21cc19d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 19:59:31.392166 1103141 system_pods.go:61] "kube-scheduler-embed-certs-114855" [0e9a0435-a1d5-42bc-a051-1587cd479ac6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:59:31.392184 1103141 system_pods.go:61] "metrics-server-74d5c6b9c-pshr5" [2d4e6b33-c325-4aa5-8458-b604be762cbe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:59:31.392192 1103141 system_pods.go:61] "storage-provisioner" [4f7b39f3-3fc5-4e41-9f58-aa1d938ce06f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 19:59:31.392199 1103141 system_pods.go:74] duration metric: took 24.119934ms to wait for pod list to return data ...
	I0717 19:59:31.392210 1103141 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:59:31.405136 1103141 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 19:59:31.405178 1103141 node_conditions.go:123] node cpu capacity is 2
	I0717 19:59:31.405192 1103141 node_conditions.go:105] duration metric: took 12.975462ms to run NodePressure ...
	I0717 19:59:31.405221 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:32.158757 1103141 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 19:59:32.167221 1103141 kubeadm.go:787] kubelet initialised
	I0717 19:59:32.167263 1103141 kubeadm.go:788] duration metric: took 8.462047ms waiting for restarted kubelet to initialise ...
	I0717 19:59:32.167277 1103141 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:59:32.178888 1103141 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-rgdz8" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:34.199125 1103141 pod_ready.go:102] pod "coredns-5d78c9869d-rgdz8" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:32.017439 1101908 crio.go:444] Took 1.881555 seconds to copy over tarball
	I0717 19:59:32.017535 1101908 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 19:59:35.573024 1101908 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.55545349s)
	I0717 19:59:35.573070 1101908 crio.go:451] Took 3.555594 seconds to extract the tarball
	I0717 19:59:35.573081 1101908 ssh_runner.go:146] rm: /preloaded.tar.lz4
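Because the preloaded images were missing, the run above copied the ~441 MB lz4 tarball into the guest, unpacked it under /var, and removed it. A sketch of that sequence with paths taken from the log; the ssh/scp transport is omitted, and the commands would normally run inside the guest:

```go
// Hedged sketch of the preload transfer/extract/cleanup step.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"

	// Skip the work if the tarball was never transferred.
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("no preload tarball found, images will be pulled instead:", err)
		return
	}

	// Extract with lz4 as the decompressor, exactly as in the logged command.
	extract := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball)
	extract.Stdout, extract.Stderr = os.Stdout, os.Stderr
	if err := extract.Run(); err != nil {
		panic(fmt.Errorf("extracting preload tarball: %w", err))
	}

	// Free the disk space the tarball occupies once its contents are unpacked.
	if err := exec.Command("sudo", "rm", "-f", tarball).Run(); err != nil {
		panic(err)
	}
	fmt.Println("preload tarball extracted under /var")
}
```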
	I0717 19:59:35.622240 1101908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:59:35.672113 1101908 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0717 19:59:35.672149 1101908 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 19:59:35.672223 1101908 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:59:35.672279 1101908 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 19:59:35.672325 1101908 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0717 19:59:35.672344 1101908 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 19:59:35.672453 1101908 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 19:59:35.672533 1101908 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0717 19:59:35.672545 1101908 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 19:59:35.672645 1101908 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0717 19:59:35.674063 1101908 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 19:59:35.674110 1101908 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0717 19:59:35.674127 1101908 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0717 19:59:35.674114 1101908 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 19:59:35.674068 1101908 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 19:59:35.674075 1101908 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 19:59:35.674208 1101908 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:59:35.674236 1101908 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0717 19:59:35.835219 1101908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0717 19:59:35.840811 1101908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0717 19:59:35.855242 1101908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0717 19:59:35.857212 1101908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 19:59:35.860547 1101908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0717 19:59:35.864234 1101908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0717 19:59:35.864519 1101908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0717 19:59:35.958693 1101908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:59:35.980110 1101908 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0717 19:59:35.980198 1101908 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0717 19:59:35.980258 1101908 ssh_runner.go:195] Run: which crictl
	I0717 19:59:36.057216 1101908 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0717 19:59:36.057278 1101908 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 19:59:36.057301 1101908 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0717 19:59:36.057334 1101908 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0717 19:59:36.057342 1101908 ssh_runner.go:195] Run: which crictl
	I0717 19:59:36.057362 1101908 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0717 19:59:36.057383 1101908 ssh_runner.go:195] Run: which crictl
	I0717 19:59:36.057412 1101908 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 19:59:36.057451 1101908 ssh_runner.go:195] Run: which crictl
	I0717 19:59:36.066796 1101908 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0717 19:59:36.066859 1101908 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 19:59:36.066944 1101908 ssh_runner.go:195] Run: which crictl
	I0717 19:59:36.084336 1101908 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0717 19:59:36.084398 1101908 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 19:59:36.084439 1101908 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0717 19:59:36.084454 1101908 ssh_runner.go:195] Run: which crictl
	I0717 19:59:36.084479 1101908 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0717 19:59:36.084520 1101908 ssh_runner.go:195] Run: which crictl
	I0717 19:59:36.208377 1101908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0717 19:59:36.208641 1101908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0717 19:59:36.208730 1101908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0717 19:59:36.208827 1101908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 19:59:36.208839 1101908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0717 19:59:36.208879 1101908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0717 19:59:36.208922 1101908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0717 19:59:36.375090 1101908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0717 19:59:36.375371 1101908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0717 19:59:36.383660 1101908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0717 19:59:36.383770 1101908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0717 19:59:36.383841 1101908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0717 19:59:36.383872 1101908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0717 19:59:36.383950 1101908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0717 19:59:36.383986 1101908 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0717 19:59:36.388877 1101908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0717 19:59:36.388897 1101908 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0717 19:59:36.388941 1101908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0717 19:59:35.275990 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:37.773385 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:34.927926 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:36.940406 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:36.219570 1103141 pod_ready.go:102] pod "coredns-5d78c9869d-rgdz8" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:37.338137 1103141 pod_ready.go:92] pod "coredns-5d78c9869d-rgdz8" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:37.338209 1103141 pod_ready.go:81] duration metric: took 5.159283632s waiting for pod "coredns-5d78c9869d-rgdz8" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:37.338228 1103141 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:39.354623 1103141 pod_ready.go:102] pod "etcd-embed-certs-114855" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:37.751639 1101908 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.362667245s)
	I0717 19:59:37.751681 1101908 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0717 19:59:37.751736 1101908 cache_images.go:92] LoadImages completed in 2.079569378s
	W0717 19:59:37.751899 1101908 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
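The image-cache step above inspects each image in the runtime, marks it as "needs transfer" when its ID does not match the cached copy, removes it with crictl, and re-loads it from the local tarball with podman. A minimal local sketch of that check-remove-load pattern, run without the SSH layer and using pause:3.1 as the example (error handling simplified; not the code minikube actually runs):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	image := "registry.k8s.io/pause:3.1"
	wantID := "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" // expected ID, taken from the log
	tarball := "/var/lib/minikube/images/pause_3.1"

	// ask the runtime which image ID (if any) it currently has for this tag
	out, _ := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if strings.TrimSpace(string(out)) == wantID {
		fmt.Println("image already present at the expected ID; nothing to transfer")
		return
	}

	// stale or missing: remove whatever is there, then load the cached tarball
	_ = exec.Command("sudo", "crictl", "rmi", image).Run()
	if loadOut, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput(); err != nil {
		fmt.Printf("load failed: %v\n%s\n", err, loadOut)
		return
	}
	fmt.Println("transferred and loaded", image)
}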
	I0717 19:59:37.752005 1101908 ssh_runner.go:195] Run: crio config
	I0717 19:59:37.844809 1101908 cni.go:84] Creating CNI manager for ""
	I0717 19:59:37.844845 1101908 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:59:37.844870 1101908 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 19:59:37.844896 1101908 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.177 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-149000 NodeName:old-k8s-version-149000 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.177"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.177 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 19:59:37.845116 1101908 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.177
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-149000"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.177
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.177"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-149000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.177:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:59:37.845228 1101908 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-149000 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.177
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-149000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
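The kubelet unit drop-in shown above is rendered from the cluster config (Kubernetes version, node name, node IP) before being copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A small text/template sketch that reproduces the same drop-in from those three values (field names here are illustrative, not minikube's actual types):

package main

import (
	"os"
	"text/template"
)

// the drop-in body, with the values that vary per profile templated out
const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("10-kubeadm.conf").Parse(dropIn))
	_ = t.Execute(os.Stdout, struct{ KubernetesVersion, NodeName, NodeIP string }{
		"v1.16.0", "old-k8s-version-149000", "192.168.50.177",
	})
}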
	I0717 19:59:37.845312 1101908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0717 19:59:37.859556 1101908 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:59:37.859640 1101908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:59:37.872740 1101908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0717 19:59:37.891132 1101908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:59:37.911902 1101908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0717 19:59:37.933209 1101908 ssh_runner.go:195] Run: grep 192.168.50.177	control-plane.minikube.internal$ /etc/hosts
	I0717 19:59:37.937317 1101908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.177	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
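The bash one-liner above keeps /etc/hosts idempotent: any existing control-plane.minikube.internal line is stripped before the current address is appended, so repeated starts never accumulate duplicate entries. The same logic in plain Go, as a sketch that writes to a temp file instead of touching /etc/hosts directly:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.50.177\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	// drop any stale control-plane line, keep everything else
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)

	if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("wrote /tmp/hosts.new; install with: sudo cp /tmp/hosts.new /etc/hosts")
}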
	I0717 19:59:37.950660 1101908 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000 for IP: 192.168.50.177
	I0717 19:59:37.950706 1101908 certs.go:190] acquiring lock for shared ca certs: {Name:mkdfbc203d7bd874aff2f1c4e5c3352251804e32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:59:37.950921 1101908 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key
	I0717 19:59:37.950998 1101908 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key
	I0717 19:59:37.951128 1101908 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/client.key
	I0717 19:59:37.951227 1101908 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/apiserver.key.c699d2bc
	I0717 19:59:37.951298 1101908 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/proxy-client.key
	I0717 19:59:37.951487 1101908 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem (1338 bytes)
	W0717 19:59:37.951529 1101908 certs.go:433] ignoring /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954_empty.pem, impossibly tiny 0 bytes
	I0717 19:59:37.951541 1101908 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:59:37.951567 1101908 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem (1082 bytes)
	I0717 19:59:37.951593 1101908 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:59:37.951634 1101908 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem (1675 bytes)
	I0717 19:59:37.951691 1101908 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 19:59:37.952597 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 19:59:37.980488 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 19:59:38.008389 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:59:38.037605 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 19:59:38.066142 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:59:38.095838 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 19:59:38.123279 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:59:38.158528 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 19:59:38.190540 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:59:38.218519 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem --> /usr/share/ca-certificates/1068954.pem (1338 bytes)
	I0717 19:59:38.245203 1101908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /usr/share/ca-certificates/10689542.pem (1708 bytes)
	I0717 19:59:38.273077 1101908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:59:38.292610 1101908 ssh_runner.go:195] Run: openssl version
	I0717 19:59:38.298983 1101908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1068954.pem && ln -fs /usr/share/ca-certificates/1068954.pem /etc/ssl/certs/1068954.pem"
	I0717 19:59:38.311477 1101908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1068954.pem
	I0717 19:59:38.316847 1101908 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:52 /usr/share/ca-certificates/1068954.pem
	I0717 19:59:38.316914 1101908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1068954.pem
	I0717 19:59:38.323114 1101908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1068954.pem /etc/ssl/certs/51391683.0"
	I0717 19:59:38.334773 1101908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10689542.pem && ln -fs /usr/share/ca-certificates/10689542.pem /etc/ssl/certs/10689542.pem"
	I0717 19:59:38.346327 1101908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10689542.pem
	I0717 19:59:38.351639 1101908 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:52 /usr/share/ca-certificates/10689542.pem
	I0717 19:59:38.351712 1101908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10689542.pem
	I0717 19:59:38.357677 1101908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10689542.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:59:38.369278 1101908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:59:38.380948 1101908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:59:38.386116 1101908 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:59:38.386181 1101908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:59:38.392204 1101908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
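The openssl/ln pairs above follow the standard CA-bundle convention: `openssl x509 -hash -noout` prints the certificate's subject-name hash (51391683, 3ec20f2e, b5213941 in this run), and /etc/ssl/certs/<hash>.0 is symlinked to the PEM so TLS libraries can locate it by hash. A local sketch of the same two steps, using a scratch directory instead of /etc/ssl/certs:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"

	// `openssl x509 -hash -noout` prints the subject-name hash, e.g. b5213941
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))

	link := "/tmp/certs/" + hash + ".0" // /etc/ssl/certs/<hash>.0 in the real flow
	_ = os.MkdirAll("/tmp/certs", 0o755)
	_ = os.Remove(link)
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pem)
}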
	I0717 19:59:38.404677 1101908 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 19:59:38.409861 1101908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:59:38.416797 1101908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:59:38.424606 1101908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:59:38.431651 1101908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:59:38.439077 1101908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:59:38.445660 1101908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
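Each `openssl x509 -checkend 86400` call above succeeds only if the certificate is still valid 24 hours from now; a cert inside that window would be regenerated instead of reused. The equivalent check with Go's crypto/x509, as a sketch using one of the paths from the log (not the code minikube runs):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// equivalent of -checkend 86400: fail if the cert expires within the next day
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h; regenerate it")
		return
	}
	fmt.Println("certificate valid until", cert.NotAfter)
}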
	I0717 19:59:38.452464 1101908 kubeadm.go:404] StartCluster: {Name:old-k8s-version-149000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-149000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.177 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:59:38.452656 1101908 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:59:38.452738 1101908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:59:38.485873 1101908 cri.go:89] found id: ""
	I0717 19:59:38.485972 1101908 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:59:38.496998 1101908 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 19:59:38.497033 1101908 kubeadm.go:636] restartCluster start
	I0717 19:59:38.497096 1101908 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:59:38.508054 1101908 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:38.509416 1101908 kubeconfig.go:92] found "old-k8s-version-149000" server: "https://192.168.50.177:8443"
	I0717 19:59:38.512586 1101908 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:59:38.524412 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:38.524486 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:38.537824 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:39.038221 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:39.038331 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:39.053301 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:39.538741 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:39.538834 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:39.552525 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:40.038056 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:40.038173 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:40.052410 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:40.537953 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:40.538060 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:40.551667 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:41.038241 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:41.038361 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:41.053485 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:41.538300 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:41.538402 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:41.552741 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:39.773598 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:42.273083 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:39.423700 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:41.918498 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:43.918876 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:40.856641 1103141 pod_ready.go:92] pod "etcd-embed-certs-114855" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:40.856671 1103141 pod_ready.go:81] duration metric: took 3.518433579s waiting for pod "etcd-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:40.856684 1103141 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:42.377156 1103141 pod_ready.go:92] pod "kube-apiserver-embed-certs-114855" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:42.377186 1103141 pod_ready.go:81] duration metric: took 1.520494525s waiting for pod "kube-apiserver-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:42.377196 1103141 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:42.387651 1103141 pod_ready.go:92] pod "kube-controller-manager-embed-certs-114855" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:42.387680 1103141 pod_ready.go:81] duration metric: took 10.47667ms waiting for pod "kube-controller-manager-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:42.387692 1103141 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lkncr" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:42.394735 1103141 pod_ready.go:92] pod "kube-proxy-lkncr" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:42.394770 1103141 pod_ready.go:81] duration metric: took 7.070744ms waiting for pod "kube-proxy-lkncr" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:42.394784 1103141 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:42.402496 1103141 pod_ready.go:92] pod "kube-scheduler-embed-certs-114855" in "kube-system" namespace has status "Ready":"True"
	I0717 19:59:42.402530 1103141 pod_ready.go:81] duration metric: took 7.737273ms waiting for pod "kube-scheduler-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:42.402544 1103141 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace to be "Ready" ...
	I0717 19:59:44.460075 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:42.038941 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:42.039027 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:42.054992 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:42.538144 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:42.538257 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:42.552160 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:43.038484 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:43.038599 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:43.052649 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:43.538407 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:43.538511 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:43.552927 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:44.038266 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:44.038396 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:44.051851 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:44.538425 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:44.538520 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:44.551726 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:45.038244 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:45.038359 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:45.053215 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:45.538908 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:45.539008 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:45.552009 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:46.038089 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:46.038204 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:46.051955 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:46.538209 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:46.538311 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:46.552579 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:44.273154 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:46.772548 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:45.919143 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:47.919930 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:46.964219 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:49.459411 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:47.038345 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:47.038434 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:47.051506 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:47.538770 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:47.538855 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:47.551813 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:48.038766 1101908 api_server.go:166] Checking apiserver status ...
	I0717 19:59:48.038900 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:59:48.053717 1101908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:59:48.524471 1101908 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0717 19:59:48.524521 1101908 kubeadm.go:1128] stopping kube-system containers ...
	I0717 19:59:48.524542 1101908 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:59:48.524625 1101908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:59:48.564396 1101908 cri.go:89] found id: ""
	I0717 19:59:48.564475 1101908 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:59:48.582891 1101908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:59:48.594121 1101908 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:59:48.594212 1101908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:59:48.604963 1101908 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 19:59:48.604998 1101908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:48.756875 1101908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:49.645754 1101908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:49.876047 1101908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:49.996960 1101908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
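Rather than a full `kubeadm init`, the restart path above re-runs individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the same /var/tmp/minikube/kubeadm.yaml. A local sketch of that sequence; the real flow executes these over SSH with PATH pointed at /var/lib/minikube/binaries/v1.16.0:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			fmt.Printf("kubeadm %v failed: %v\n%s\n", p, err, out)
			return
		}
	}
	fmt.Println("control plane components regenerated")
}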
	I0717 19:59:50.109251 1101908 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:59:50.109337 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:59:50.630868 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:59:51.130836 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:59:51.630446 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:59:51.659578 1101908 api_server.go:72] duration metric: took 1.550325604s to wait for apiserver process to appear ...
	I0717 19:59:51.659605 1101908 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:59:51.659625 1101908 api_server.go:253] Checking apiserver healthz at https://192.168.50.177:8443/healthz ...
	I0717 19:59:48.773967 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:50.775054 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:53.274949 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:49.922365 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:52.422385 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:51.459819 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:53.958809 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:56.660515 1101908 api_server.go:269] stopped: https://192.168.50.177:8443/healthz: Get "https://192.168.50.177:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 19:59:55.773902 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:58.274862 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:54.427715 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:56.922668 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:57.161458 1101908 api_server.go:253] Checking apiserver healthz at https://192.168.50.177:8443/healthz ...
	I0717 19:59:57.720749 1101908 api_server.go:279] https://192.168.50.177:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:59:57.720797 1101908 api_server.go:103] status: https://192.168.50.177:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:59:57.720816 1101908 api_server.go:253] Checking apiserver healthz at https://192.168.50.177:8443/healthz ...
	I0717 19:59:57.828454 1101908 api_server.go:279] https://192.168.50.177:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:59:57.828489 1101908 api_server.go:103] status: https://192.168.50.177:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:59:58.160896 1101908 api_server.go:253] Checking apiserver healthz at https://192.168.50.177:8443/healthz ...
	I0717 19:59:58.173037 1101908 api_server.go:279] https://192.168.50.177:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0717 19:59:58.173072 1101908 api_server.go:103] status: https://192.168.50.177:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0717 19:59:58.660738 1101908 api_server.go:253] Checking apiserver healthz at https://192.168.50.177:8443/healthz ...
	I0717 19:59:58.672508 1101908 api_server.go:279] https://192.168.50.177:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0717 19:59:58.672551 1101908 api_server.go:103] status: https://192.168.50.177:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0717 19:59:59.161133 1101908 api_server.go:253] Checking apiserver healthz at https://192.168.50.177:8443/healthz ...
	I0717 19:59:59.169444 1101908 api_server.go:279] https://192.168.50.177:8443/healthz returned 200:
	ok
	I0717 19:59:59.179637 1101908 api_server.go:141] control plane version: v1.16.0
	I0717 19:59:59.179675 1101908 api_server.go:131] duration metric: took 7.520063574s to wait for apiserver health ...
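The healthz probe above tolerates 403 (anonymous access before the RBAC bootstrap roles exist) and 500 (post-start hooks still failing) and only counts a plain 200 "ok" as healthy. A minimal standalone probe with the same tolerance, assuming the cluster's self-signed CA is not in the local trust store so TLS verification is skipped:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.50.177:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// 403 and 500 both mean "not ready yet"; only 200 "ok" is healthy
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for /healthz")
}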
	I0717 19:59:59.179689 1101908 cni.go:84] Creating CNI manager for ""
	I0717 19:59:59.179703 1101908 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:59:59.182357 1101908 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:59:55.959106 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:58.458415 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:00.458582 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:59.184702 1101908 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:59:59.197727 1101908 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
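The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration for the 10.244.0.0/16 pod CIDR. Its exact contents are not shown in the log, so the conflist written by the sketch below is only an illustrative example of what a bridge + host-local IPAM configuration of that shape looks like:

package main

import (
	"fmt"
	"os"
)

func main() {
	// illustrative bridge CNI conflist; field values are assumptions, not minikube's generated file
	conflist := `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`
	if err := os.WriteFile("/tmp/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("wrote /tmp/1-k8s.conflist (copy to /etc/cni/net.d/ on the node)")
}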
	I0717 19:59:59.226682 1101908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:59:59.237874 1101908 system_pods.go:59] 7 kube-system pods found
	I0717 19:59:59.237911 1101908 system_pods.go:61] "coredns-5644d7b6d9-g7fjx" [f9f27bce-aaf6-43f8-8a4b-a87230ceed0e] Running
	I0717 19:59:59.237917 1101908 system_pods.go:61] "etcd-old-k8s-version-149000" [2c732d6d-8a38-401d-aebf-e439c7fcf530] Running
	I0717 19:59:59.237922 1101908 system_pods.go:61] "kube-apiserver-old-k8s-version-149000" [b7f2c355-86cd-4d4c-b7da-043094174829] Running
	I0717 19:59:59.237927 1101908 system_pods.go:61] "kube-controller-manager-old-k8s-version-149000" [30f723aa-a978-4fbb-9210-43a29284aa41] Running
	I0717 19:59:59.237931 1101908 system_pods.go:61] "kube-proxy-f68hg" [a39dea78-e9bb-4f1b-8615-a51a42c6d13f] Running
	I0717 19:59:59.237935 1101908 system_pods.go:61] "kube-scheduler-old-k8s-version-149000" [a84bce5d-82af-4282-a36f-0d1031715a1a] Running
	I0717 19:59:59.237938 1101908 system_pods.go:61] "storage-provisioner" [c5e96cda-ddbc-4d29-86c3-d7ac4c717f61] Running
	I0717 19:59:59.237944 1101908 system_pods.go:74] duration metric: took 11.222716ms to wait for pod list to return data ...
	I0717 19:59:59.237952 1101908 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:59:59.241967 1101908 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 19:59:59.242003 1101908 node_conditions.go:123] node cpu capacity is 2
	I0717 19:59:59.242051 1101908 node_conditions.go:105] duration metric: took 4.091279ms to run NodePressure ...
	I0717 19:59:59.242080 1101908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:59:59.612659 1101908 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 19:59:59.623317 1101908 retry.go:31] will retry after 338.189596ms: kubelet not initialised
	I0717 19:59:59.972718 1101908 retry.go:31] will retry after 522.339878ms: kubelet not initialised
	I0717 20:00:00.503134 1101908 retry.go:31] will retry after 523.863562ms: kubelet not initialised
	I0717 20:00:01.032819 1101908 retry.go:31] will retry after 993.099088ms: kubelet not initialised
	I0717 20:00:00.773342 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:02.775558 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 19:59:59.424228 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:01.424791 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:03.920321 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:02.462125 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:04.960081 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:02.031287 1101908 retry.go:31] will retry after 1.744721946s: kubelet not initialised
	I0717 20:00:03.780335 1101908 retry.go:31] will retry after 2.704259733s: kubelet not initialised
	I0717 20:00:06.491260 1101908 retry.go:31] will retry after 2.934973602s: kubelet not initialised
	I0717 20:00:05.273963 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:07.772710 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:06.428014 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:08.920105 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:07.459314 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:09.959084 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:09.433009 1101908 retry.go:31] will retry after 2.28873038s: kubelet not initialised
	I0717 20:00:11.729010 1101908 retry.go:31] will retry after 4.261199393s: kubelet not initialised
	I0717 20:00:09.772754 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:11.773102 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:11.424610 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:13.922384 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:11.959437 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:14.459152 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:15.999734 1101908 retry.go:31] will retry after 8.732603244s: kubelet not initialised
	I0717 20:00:14.278965 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:16.772786 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:16.424980 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:18.919729 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:16.460363 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:18.960012 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:18.773609 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:21.272529 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:23.272642 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:20.922495 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:23.422032 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:21.460808 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:23.959242 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:24.739282 1101908 retry.go:31] will retry after 8.040459769s: kubelet not initialised
	I0717 20:00:25.274297 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:27.773410 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:25.923167 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:28.420939 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:25.959431 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:27.960549 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:30.459601 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:30.274460 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:32.276595 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:30.428741 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:32.919601 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:32.459855 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:34.960084 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:32.784544 1101908 kubeadm.go:787] kubelet initialised
	I0717 20:00:32.784571 1101908 kubeadm.go:788] duration metric: took 33.171875609s waiting for restarted kubelet to initialise ...
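The retry.go lines above show the post-restart wait: the tooling polls until the restarted kubelet has come up, sleeping a growing interval between attempts (338ms, 522ms, ... up to several seconds). A sketch of the same poll-with-backoff shape; the readiness check used here, kubelet's local healthz port, is illustrative rather than the condition actually evaluated in the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	delay := 300 * time.Millisecond
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		// "initialised" here: kubelet answers on its local healthz endpoint
		if exec.Command("curl", "-sf", "http://127.0.0.1:10248/healthz").Run() == nil {
			fmt.Println("kubelet initialised")
			return
		}
		fmt.Printf("will retry after %v: kubelet not initialised\n", delay)
		time.Sleep(delay)
		if delay < 10*time.Second {
			delay *= 2 // rough exponential backoff, capped
		}
	}
	fmt.Println("gave up waiting for kubelet")
}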
	I0717 20:00:32.784579 1101908 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 20:00:32.789500 1101908 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-9x6xd" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:32.795369 1101908 pod_ready.go:92] pod "coredns-5644d7b6d9-9x6xd" in "kube-system" namespace has status "Ready":"True"
	I0717 20:00:32.795396 1101908 pod_ready.go:81] duration metric: took 5.860061ms waiting for pod "coredns-5644d7b6d9-9x6xd" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:32.795406 1101908 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-g7fjx" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:32.800899 1101908 pod_ready.go:92] pod "coredns-5644d7b6d9-g7fjx" in "kube-system" namespace has status "Ready":"True"
	I0717 20:00:32.800922 1101908 pod_ready.go:81] duration metric: took 5.509805ms waiting for pod "coredns-5644d7b6d9-g7fjx" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:32.800931 1101908 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-149000" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:32.806100 1101908 pod_ready.go:92] pod "etcd-old-k8s-version-149000" in "kube-system" namespace has status "Ready":"True"
	I0717 20:00:32.806123 1101908 pod_ready.go:81] duration metric: took 5.185189ms waiting for pod "etcd-old-k8s-version-149000" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:32.806139 1101908 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-149000" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:32.810963 1101908 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-149000" in "kube-system" namespace has status "Ready":"True"
	I0717 20:00:32.810990 1101908 pod_ready.go:81] duration metric: took 4.843622ms waiting for pod "kube-apiserver-old-k8s-version-149000" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:32.811000 1101908 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-149000" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:33.183907 1101908 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-149000" in "kube-system" namespace has status "Ready":"True"
	I0717 20:00:33.183945 1101908 pod_ready.go:81] duration metric: took 372.931164ms waiting for pod "kube-controller-manager-old-k8s-version-149000" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:33.183961 1101908 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f68hg" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:33.585028 1101908 pod_ready.go:92] pod "kube-proxy-f68hg" in "kube-system" namespace has status "Ready":"True"
	I0717 20:00:33.585064 1101908 pod_ready.go:81] duration metric: took 401.095806ms waiting for pod "kube-proxy-f68hg" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:33.585075 1101908 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-149000" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:33.984668 1101908 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-149000" in "kube-system" namespace has status "Ready":"True"
	I0717 20:00:33.984702 1101908 pod_ready.go:81] duration metric: took 399.618516ms waiting for pod "kube-scheduler-old-k8s-version-149000" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:33.984719 1101908 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace to be "Ready" ...
	I0717 20:00:36.392779 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
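
The repeated pod_ready.go entries above come from a readiness poll: each iteration fetches the pod from the kube-system namespace and checks whether its Ready condition is True, retrying until the per-pod timeout (here 4m0s) expires. Below is a minimal client-go sketch of that kind of check, for illustration only; it is not minikube's pod_ready.go code, and the kubeconfig path, the 2-second poll interval, and the reuse of a pod name from the log are assumptions.

// poll_ready.go - illustrative sketch, not minikube's pod_ready.go implementation.
package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *v1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == v1.PodReady {
			return c.Status == v1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig path; minikube builds its client differently.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Mirror the "waiting up to 4m0s ... to be Ready" behaviour from the log.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "metrics-server-74d5856cc6-pjjtx", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("context deadline exceeded while waiting for pod")
			return
		case <-time.After(2 * time.Second): // assumed poll interval
		}
	}
}
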
	I0717 20:00:34.774126 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:37.273706 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:34.921839 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:37.434861 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:37.460518 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:39.960345 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:38.393483 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:40.893085 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:39.773390 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:41.773759 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:39.920512 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:41.920773 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:43.921648 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:42.458830 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:44.958864 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:43.393911 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:45.395481 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:44.273504 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:46.772509 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:45.923812 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:48.422996 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:47.459707 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:49.960056 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:47.892578 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:50.393881 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:48.774960 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:51.273048 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:50.919768 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:52.920372 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:52.458962 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:54.460345 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:52.892172 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:54.893802 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:53.775343 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:56.272701 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:55.427664 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:57.919163 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:56.961203 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:59.458439 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:57.393429 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:59.892089 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:58.772852 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:00.773814 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:03.272058 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:00:59.920118 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:01.920524 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:01.459281 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:03.460348 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:01.892908 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:04.392588 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:06.393093 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:05.272559 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:07.273883 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:04.421056 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:06.931053 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:05.960254 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:08.457727 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:10.459842 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:08.394141 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:10.892223 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:09.772505 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:11.772971 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:09.422626 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:11.423328 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:13.424365 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:12.958612 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:14.965490 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:12.893418 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:15.394472 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:14.272688 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:16.273685 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:15.919394 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:17.923047 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:17.460160 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:19.958439 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:17.894003 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:19.894407 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:18.772990 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:21.272821 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:23.273740 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:20.427751 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:22.920375 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:21.959239 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:23.959721 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:22.392669 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:24.392858 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:26.392896 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:25.773792 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:28.272610 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:25.423969 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:27.920156 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:25.960648 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:28.460460 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:28.393135 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:30.892597 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:30.273479 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:32.772964 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:29.920769 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:31.921078 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:30.959214 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:33.459431 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:32.892662 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:34.893997 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:35.271152 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:37.273194 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:34.423090 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:36.920078 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:35.960397 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:38.458322 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:40.459780 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:37.393337 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:39.394287 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:39.772604 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:42.273098 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:39.421175 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:41.422356 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:43.920740 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:42.959038 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:45.461396 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:41.891807 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:43.892286 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:45.894698 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:44.772741 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:46.774412 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:46.424856 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:48.425180 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:47.959378 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:49.960002 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:48.392683 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:50.393690 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:49.275313 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:51.773822 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:50.919701 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:52.919921 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:52.459957 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:54.958709 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:52.894991 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:55.392555 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:54.273372 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:56.775369 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:54.920834 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:56.921032 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:57.458730 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:59.460912 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:57.393828 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:59.892700 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:59.272482 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:01.774098 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:01:59.429623 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:01.920129 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:03.920308 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:01.958119 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:03.958450 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:01.894130 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:03.894522 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:05.895253 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:04.273903 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:06.773689 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:06.424487 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:08.427374 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:05.961652 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:08.457716 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:10.458998 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:08.392784 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:10.393957 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:08.774235 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:11.272040 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:13.273524 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:10.920257 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:12.921203 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:12.459321 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:14.460373 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:12.893440 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:15.392849 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:15.774096 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:18.274263 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:15.421911 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:17.922223 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:16.461304 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:18.958236 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:17.393857 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:19.893380 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:20.274441 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:22.773139 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:20.426046 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:22.919646 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:20.959049 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:23.460465 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:22.392918 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:24.892470 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:25.273192 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:27.273498 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:24.919892 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:26.921648 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:25.961037 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:28.458547 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:26.893611 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:29.393411 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:31.393789 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:29.771999 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:31.772639 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:29.419744 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:31.420846 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:33.422484 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:30.958391 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:33.457895 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:35.459845 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:33.893731 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:36.393503 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:34.272758 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:36.275172 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:35.920446 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:37.922565 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:37.460196 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:39.957808 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:38.394837 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:40.900948 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:38.772728 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:40.773003 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:43.273981 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:40.421480 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:42.919369 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:42.458683 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:44.458762 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:43.392899 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:45.893528 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:45.774587 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:48.273073 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:45.422093 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:47.429470 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:46.958556 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:49.457855 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:47.895376 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:50.392344 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:50.771704 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:52.772560 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:49.918779 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:51.919087 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:51.463426 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:53.957695 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:52.894219 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:54.894786 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:55.273619 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:57.775426 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:54.421093 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:56.424484 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:58.921289 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:55.959421 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:57.960287 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:00.460659 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:57.393604 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:02:59.394180 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:00.272948 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:02.274904 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:01.421007 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:03.422071 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:02.965138 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:05.458181 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:01.891831 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:03.892978 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:05.895017 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:04.772127 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:07.274312 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:05.920564 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:08.420835 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:07.459555 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:09.460645 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:08.392743 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:10.892887 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:09.772353 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:11.772877 1102136 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:12.368174 1102136 pod_ready.go:81] duration metric: took 4m0.000660307s waiting for pod "metrics-server-74d5c6b9c-hnngh" in "kube-system" namespace to be "Ready" ...
	E0717 20:03:12.368224 1102136 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 20:03:12.368251 1102136 pod_ready.go:38] duration metric: took 4m3.60522468s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 20:03:12.368299 1102136 api_server.go:52] waiting for apiserver process to appear ...
	I0717 20:03:12.368343 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 20:03:12.368422 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 20:03:12.425640 1102136 cri.go:89] found id: "eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3"
	I0717 20:03:12.425667 1102136 cri.go:89] found id: ""
	I0717 20:03:12.425684 1102136 logs.go:284] 1 containers: [eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3]
	I0717 20:03:12.425749 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:12.430857 1102136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 20:03:12.430926 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 20:03:12.464958 1102136 cri.go:89] found id: "4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc"
	I0717 20:03:12.464987 1102136 cri.go:89] found id: ""
	I0717 20:03:12.464996 1102136 logs.go:284] 1 containers: [4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc]
	I0717 20:03:12.465063 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:12.470768 1102136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 20:03:12.470865 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 20:03:12.509622 1102136 cri.go:89] found id: "63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce"
	I0717 20:03:12.509655 1102136 cri.go:89] found id: ""
	I0717 20:03:12.509665 1102136 logs.go:284] 1 containers: [63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce]
	I0717 20:03:12.509718 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:12.514266 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 20:03:12.514346 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 20:03:12.556681 1102136 cri.go:89] found id: "0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5"
	I0717 20:03:12.556705 1102136 cri.go:89] found id: ""
	I0717 20:03:12.556713 1102136 logs.go:284] 1 containers: [0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5]
	I0717 20:03:12.556779 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:12.561653 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 20:03:12.561749 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 20:03:12.595499 1102136 cri.go:89] found id: "c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a"
	I0717 20:03:12.595527 1102136 cri.go:89] found id: ""
	I0717 20:03:12.595537 1102136 logs.go:284] 1 containers: [c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a]
	I0717 20:03:12.595603 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:12.600644 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 20:03:12.600728 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 20:03:12.635293 1102136 cri.go:89] found id: "2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9"
	I0717 20:03:12.635327 1102136 cri.go:89] found id: ""
	I0717 20:03:12.635341 1102136 logs.go:284] 1 containers: [2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9]
	I0717 20:03:12.635409 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:12.640445 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 20:03:12.640612 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 20:03:12.679701 1102136 cri.go:89] found id: ""
	I0717 20:03:12.679738 1102136 logs.go:284] 0 containers: []
	W0717 20:03:12.679748 1102136 logs.go:286] No container was found matching "kindnet"
	I0717 20:03:12.679755 1102136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 20:03:12.679817 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 20:03:12.711772 1102136 cri.go:89] found id: "434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447"
	I0717 20:03:12.711815 1102136 cri.go:89] found id: "cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379"
	I0717 20:03:12.711822 1102136 cri.go:89] found id: ""
	I0717 20:03:12.711833 1102136 logs.go:284] 2 containers: [434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447 cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379]
	I0717 20:03:12.711904 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:12.716354 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:12.720769 1102136 logs.go:123] Gathering logs for kube-proxy [c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a] ...
	I0717 20:03:12.720806 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a"
	I0717 20:03:12.757719 1102136 logs.go:123] Gathering logs for kube-controller-manager [2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9] ...
	I0717 20:03:12.757766 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9"
	I0717 20:03:12.804972 1102136 logs.go:123] Gathering logs for coredns [63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce] ...
	I0717 20:03:12.805019 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce"
	I0717 20:03:12.841021 1102136 logs.go:123] Gathering logs for kube-scheduler [0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5] ...
	I0717 20:03:12.841055 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5"
	I0717 20:03:12.890140 1102136 logs.go:123] Gathering logs for storage-provisioner [434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447] ...
	I0717 20:03:12.890185 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447"
	I0717 20:03:12.926177 1102136 logs.go:123] Gathering logs for kubelet ...
	I0717 20:03:12.926219 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 20:03:12.985838 1102136 logs.go:123] Gathering logs for dmesg ...
	I0717 20:03:12.985904 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 20:03:13.003223 1102136 logs.go:123] Gathering logs for describe nodes ...
	I0717 20:03:13.003257 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 20:03:13.180312 1102136 logs.go:123] Gathering logs for kube-apiserver [eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3] ...
	I0717 20:03:13.180361 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3"
	I0717 20:03:13.234663 1102136 logs.go:123] Gathering logs for etcd [4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc] ...
	I0717 20:03:13.234711 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc"
	I0717 20:03:13.297008 1102136 logs.go:123] Gathering logs for storage-provisioner [cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379] ...
	I0717 20:03:13.297065 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379"
	I0717 20:03:13.335076 1102136 logs.go:123] Gathering logs for CRI-O ...
	I0717 20:03:13.335110 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
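
The "Gathering logs for ..." sequence above shells out to journalctl, dmesg, and crictl on the guest (via ssh_runner) to capture the last 400 lines from each component. A rough sketch of the same commands follows; it runs them locally for illustration only, whereas minikube actually executes them over SSH inside the VM, and the component list here is an assumption taken from the log.

// gather_logs.go - illustrative sketch of the crictl/journalctl commands shown
// above; minikube runs these over SSH inside the guest rather than locally.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Each entry mirrors one "Gathering logs for ..." step from the log.
	cmds := []string{
		"sudo journalctl -u kubelet -n 400",
		"sudo journalctl -u crio -n 400",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		// Container IDs are discovered with: crictl ps -a --quiet --name=<component>
		"sudo crictl ps -a --quiet --name=kube-apiserver",
	}
	for _, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		if err != nil {
			fmt.Printf("command %q failed: %v\n", c, err)
		}
		fmt.Printf("%s\n%s\n", c, out)
	}
}
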
	I0717 20:03:10.919208 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:12.921588 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:11.958471 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:13.959630 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:12.893125 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:15.392702 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:13.901775 1102136 logs.go:123] Gathering logs for container status ...
	I0717 20:03:13.901828 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 20:03:16.451075 1102136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:03:16.470892 1102136 api_server.go:72] duration metric: took 4m15.23519157s to wait for apiserver process to appear ...
	I0717 20:03:16.470922 1102136 api_server.go:88] waiting for apiserver healthz status ...
	I0717 20:03:16.470963 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 20:03:16.471033 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 20:03:16.515122 1102136 cri.go:89] found id: "eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3"
	I0717 20:03:16.515151 1102136 cri.go:89] found id: ""
	I0717 20:03:16.515161 1102136 logs.go:284] 1 containers: [eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3]
	I0717 20:03:16.515217 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:16.519734 1102136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 20:03:16.519828 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 20:03:16.552440 1102136 cri.go:89] found id: "4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc"
	I0717 20:03:16.552491 1102136 cri.go:89] found id: ""
	I0717 20:03:16.552503 1102136 logs.go:284] 1 containers: [4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc]
	I0717 20:03:16.552569 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:16.557827 1102136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 20:03:16.557935 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 20:03:16.598317 1102136 cri.go:89] found id: "63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce"
	I0717 20:03:16.598344 1102136 cri.go:89] found id: ""
	I0717 20:03:16.598354 1102136 logs.go:284] 1 containers: [63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce]
	I0717 20:03:16.598425 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:16.604234 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 20:03:16.604331 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 20:03:16.638321 1102136 cri.go:89] found id: "0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5"
	I0717 20:03:16.638349 1102136 cri.go:89] found id: ""
	I0717 20:03:16.638360 1102136 logs.go:284] 1 containers: [0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5]
	I0717 20:03:16.638429 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:16.642755 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 20:03:16.642840 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 20:03:16.681726 1102136 cri.go:89] found id: "c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a"
	I0717 20:03:16.681763 1102136 cri.go:89] found id: ""
	I0717 20:03:16.681776 1102136 logs.go:284] 1 containers: [c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a]
	I0717 20:03:16.681848 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:16.686317 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 20:03:16.686394 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 20:03:16.723303 1102136 cri.go:89] found id: "2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9"
	I0717 20:03:16.723328 1102136 cri.go:89] found id: ""
	I0717 20:03:16.723337 1102136 logs.go:284] 1 containers: [2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9]
	I0717 20:03:16.723387 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:16.727491 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 20:03:16.727586 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 20:03:16.756931 1102136 cri.go:89] found id: ""
	I0717 20:03:16.756960 1102136 logs.go:284] 0 containers: []
	W0717 20:03:16.756968 1102136 logs.go:286] No container was found matching "kindnet"
	I0717 20:03:16.756975 1102136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 20:03:16.757036 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 20:03:16.788732 1102136 cri.go:89] found id: "434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447"
	I0717 20:03:16.788819 1102136 cri.go:89] found id: "cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379"
	I0717 20:03:16.788832 1102136 cri.go:89] found id: ""
	I0717 20:03:16.788845 1102136 logs.go:284] 2 containers: [434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447 cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379]
	I0717 20:03:16.788913 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:16.793783 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:16.797868 1102136 logs.go:123] Gathering logs for dmesg ...
	I0717 20:03:16.797892 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 20:03:16.813545 1102136 logs.go:123] Gathering logs for etcd [4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc] ...
	I0717 20:03:16.813603 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc"
	I0717 20:03:16.865094 1102136 logs.go:123] Gathering logs for coredns [63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce] ...
	I0717 20:03:16.865144 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce"
	I0717 20:03:16.904821 1102136 logs.go:123] Gathering logs for kube-scheduler [0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5] ...
	I0717 20:03:16.904869 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5"
	I0717 20:03:16.945822 1102136 logs.go:123] Gathering logs for kube-proxy [c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a] ...
	I0717 20:03:16.945865 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a"
	I0717 20:03:16.986531 1102136 logs.go:123] Gathering logs for storage-provisioner [434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447] ...
	I0717 20:03:16.986580 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447"
	I0717 20:03:17.023216 1102136 logs.go:123] Gathering logs for storage-provisioner [cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379] ...
	I0717 20:03:17.023253 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379"
	I0717 20:03:17.062491 1102136 logs.go:123] Gathering logs for kubelet ...
	I0717 20:03:17.062532 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 20:03:17.137024 1102136 logs.go:123] Gathering logs for describe nodes ...
	I0717 20:03:17.137085 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 20:03:17.292825 1102136 logs.go:123] Gathering logs for kube-apiserver [eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3] ...
	I0717 20:03:17.292881 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3"
	I0717 20:03:17.345470 1102136 logs.go:123] Gathering logs for kube-controller-manager [2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9] ...
	I0717 20:03:17.345519 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9"
	I0717 20:03:17.401262 1102136 logs.go:123] Gathering logs for CRI-O ...
	I0717 20:03:17.401326 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 20:03:18.037384 1102136 logs.go:123] Gathering logs for container status ...
	I0717 20:03:18.037440 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 20:03:15.422242 1102415 pod_ready.go:102] pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:17.011882 1102415 pod_ready.go:81] duration metric: took 4m0.000519116s waiting for pod "metrics-server-74d5c6b9c-hzcd7" in "kube-system" namespace to be "Ready" ...
	E0717 20:03:17.011940 1102415 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 20:03:17.011951 1102415 pod_ready.go:38] duration metric: took 4m2.40035739s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 20:03:17.011974 1102415 api_server.go:52] waiting for apiserver process to appear ...
	I0717 20:03:17.012009 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 20:03:17.012082 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 20:03:17.072352 1102415 cri.go:89] found id: "210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a"
	I0717 20:03:17.072381 1102415 cri.go:89] found id: ""
	I0717 20:03:17.072396 1102415 logs.go:284] 1 containers: [210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a]
	I0717 20:03:17.072467 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:17.078353 1102415 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 20:03:17.078432 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 20:03:17.122416 1102415 cri.go:89] found id: "bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2"
	I0717 20:03:17.122455 1102415 cri.go:89] found id: ""
	I0717 20:03:17.122466 1102415 logs.go:284] 1 containers: [bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2]
	I0717 20:03:17.122539 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:17.128311 1102415 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 20:03:17.128394 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 20:03:17.166606 1102415 cri.go:89] found id: "cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524"
	I0717 20:03:17.166637 1102415 cri.go:89] found id: ""
	I0717 20:03:17.166653 1102415 logs.go:284] 1 containers: [cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524]
	I0717 20:03:17.166720 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:17.172605 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 20:03:17.172693 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 20:03:17.221109 1102415 cri.go:89] found id: "9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261"
	I0717 20:03:17.221138 1102415 cri.go:89] found id: ""
	I0717 20:03:17.221149 1102415 logs.go:284] 1 containers: [9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261]
	I0717 20:03:17.221216 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:17.226305 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 20:03:17.226394 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 20:03:17.271876 1102415 cri.go:89] found id: "76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951"
	I0717 20:03:17.271902 1102415 cri.go:89] found id: ""
	I0717 20:03:17.271911 1102415 logs.go:284] 1 containers: [76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951]
	I0717 20:03:17.271979 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:17.281914 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 20:03:17.282016 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 20:03:17.319258 1102415 cri.go:89] found id: "280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6"
	I0717 20:03:17.319288 1102415 cri.go:89] found id: ""
	I0717 20:03:17.319309 1102415 logs.go:284] 1 containers: [280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6]
	I0717 20:03:17.319376 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:17.323955 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 20:03:17.324102 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 20:03:17.357316 1102415 cri.go:89] found id: ""
	I0717 20:03:17.357355 1102415 logs.go:284] 0 containers: []
	W0717 20:03:17.357367 1102415 logs.go:286] No container was found matching "kindnet"
	I0717 20:03:17.357375 1102415 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 20:03:17.357458 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 20:03:17.409455 1102415 cri.go:89] found id: "19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064"
	I0717 20:03:17.409553 1102415 cri.go:89] found id: "4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88"
	I0717 20:03:17.409613 1102415 cri.go:89] found id: ""
	I0717 20:03:17.409626 1102415 logs.go:284] 2 containers: [19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064 4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88]
	I0717 20:03:17.409706 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:17.417046 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:17.428187 1102415 logs.go:123] Gathering logs for kubelet ...
	I0717 20:03:17.428242 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 20:03:17.504409 1102415 logs.go:123] Gathering logs for describe nodes ...
	I0717 20:03:17.504454 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 20:03:17.673502 1102415 logs.go:123] Gathering logs for etcd [bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2] ...
	I0717 20:03:17.673576 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2"
	I0717 20:03:17.728765 1102415 logs.go:123] Gathering logs for kube-controller-manager [280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6] ...
	I0717 20:03:17.728818 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6"
	I0717 20:03:17.791192 1102415 logs.go:123] Gathering logs for container status ...
	I0717 20:03:17.791249 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 20:03:17.844883 1102415 logs.go:123] Gathering logs for storage-provisioner [19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064] ...
	I0717 20:03:17.844944 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064"
	I0717 20:03:17.891456 1102415 logs.go:123] Gathering logs for storage-provisioner [4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88] ...
	I0717 20:03:17.891501 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88"
	I0717 20:03:17.927018 1102415 logs.go:123] Gathering logs for CRI-O ...
	I0717 20:03:17.927057 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 20:03:18.493310 1102415 logs.go:123] Gathering logs for dmesg ...
	I0717 20:03:18.493362 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 20:03:18.510255 1102415 logs.go:123] Gathering logs for kube-apiserver [210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a] ...
	I0717 20:03:18.510302 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a"
	I0717 20:03:18.558006 1102415 logs.go:123] Gathering logs for coredns [cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524] ...
	I0717 20:03:18.558054 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524"
	I0717 20:03:18.595130 1102415 logs.go:123] Gathering logs for kube-scheduler [9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261] ...
	I0717 20:03:18.595166 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261"
	I0717 20:03:18.636909 1102415 logs.go:123] Gathering logs for kube-proxy [76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951] ...
	I0717 20:03:18.636967 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951"
	I0717 20:03:16.460091 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:18.959764 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:17.395341 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:19.891916 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:20.585703 1102136 api_server.go:253] Checking apiserver healthz at https://192.168.61.65:8443/healthz ...
	I0717 20:03:20.591606 1102136 api_server.go:279] https://192.168.61.65:8443/healthz returned 200:
	ok
	I0717 20:03:20.593225 1102136 api_server.go:141] control plane version: v1.27.3
	I0717 20:03:20.593249 1102136 api_server.go:131] duration metric: took 4.122320377s to wait for apiserver health ...
	I0717 20:03:20.593259 1102136 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 20:03:20.593297 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 20:03:20.593391 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 20:03:20.636361 1102136 cri.go:89] found id: "eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3"
	I0717 20:03:20.636401 1102136 cri.go:89] found id: ""
	I0717 20:03:20.636413 1102136 logs.go:284] 1 containers: [eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3]
	I0717 20:03:20.636488 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:20.641480 1102136 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 20:03:20.641622 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 20:03:20.674769 1102136 cri.go:89] found id: "4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc"
	I0717 20:03:20.674791 1102136 cri.go:89] found id: ""
	I0717 20:03:20.674799 1102136 logs.go:284] 1 containers: [4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc]
	I0717 20:03:20.674852 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:20.679515 1102136 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 20:03:20.679587 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 20:03:20.717867 1102136 cri.go:89] found id: "63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce"
	I0717 20:03:20.717914 1102136 cri.go:89] found id: ""
	I0717 20:03:20.717927 1102136 logs.go:284] 1 containers: [63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce]
	I0717 20:03:20.717997 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:20.723020 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 20:03:20.723106 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 20:03:20.759930 1102136 cri.go:89] found id: "0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5"
	I0717 20:03:20.759957 1102136 cri.go:89] found id: ""
	I0717 20:03:20.759968 1102136 logs.go:284] 1 containers: [0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5]
	I0717 20:03:20.760032 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:20.764308 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 20:03:20.764378 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 20:03:20.804542 1102136 cri.go:89] found id: "c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a"
	I0717 20:03:20.804570 1102136 cri.go:89] found id: ""
	I0717 20:03:20.804580 1102136 logs.go:284] 1 containers: [c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a]
	I0717 20:03:20.804654 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:20.810036 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 20:03:20.810133 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 20:03:20.846655 1102136 cri.go:89] found id: "2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9"
	I0717 20:03:20.846681 1102136 cri.go:89] found id: ""
	I0717 20:03:20.846689 1102136 logs.go:284] 1 containers: [2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9]
	I0717 20:03:20.846745 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:20.853633 1102136 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 20:03:20.853741 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 20:03:20.886359 1102136 cri.go:89] found id: ""
	I0717 20:03:20.886393 1102136 logs.go:284] 0 containers: []
	W0717 20:03:20.886405 1102136 logs.go:286] No container was found matching "kindnet"
	I0717 20:03:20.886413 1102136 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 20:03:20.886489 1102136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 20:03:20.924476 1102136 cri.go:89] found id: "434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447"
	I0717 20:03:20.924508 1102136 cri.go:89] found id: "cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379"
	I0717 20:03:20.924513 1102136 cri.go:89] found id: ""
	I0717 20:03:20.924524 1102136 logs.go:284] 2 containers: [434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447 cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379]
	I0717 20:03:20.924576 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:20.929775 1102136 ssh_runner.go:195] Run: which crictl
	I0717 20:03:20.935520 1102136 logs.go:123] Gathering logs for CRI-O ...
	I0717 20:03:20.935547 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 20:03:21.543605 1102136 logs.go:123] Gathering logs for describe nodes ...
	I0717 20:03:21.543668 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 20:03:21.694696 1102136 logs.go:123] Gathering logs for kube-scheduler [0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5] ...
	I0717 20:03:21.694763 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db29fec08ce97a2a0261e560781e435fe426efc8c04241433f8dbe91a0327f5"
	I0717 20:03:21.736092 1102136 logs.go:123] Gathering logs for kube-proxy [c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a] ...
	I0717 20:03:21.736150 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8746d568c4d09b85238f50450b7f8128df9bd447408c0a7aac3ea27682c247a"
	I0717 20:03:21.771701 1102136 logs.go:123] Gathering logs for kube-controller-manager [2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9] ...
	I0717 20:03:21.771749 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ba1ed857458d6e2ae5dd46e1d1e40e3ac04d4ffba1dfc234ac12701d5eea7f9"
	I0717 20:03:21.822783 1102136 logs.go:123] Gathering logs for storage-provisioner [434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447] ...
	I0717 20:03:21.822835 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 434d3b3c5d986062c640c19ad3ecdbd00ce82807fee7305714628dd54db03447"
	I0717 20:03:21.885797 1102136 logs.go:123] Gathering logs for storage-provisioner [cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379] ...
	I0717 20:03:21.885851 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb2ddc8935dcdea4c915ccd14fe8d9dbedfd7db37b57f6ee221889e2ad174379"
	I0717 20:03:21.930801 1102136 logs.go:123] Gathering logs for container status ...
	I0717 20:03:21.930842 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 20:03:21.985829 1102136 logs.go:123] Gathering logs for kubelet ...
	I0717 20:03:21.985862 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 20:03:22.056958 1102136 logs.go:123] Gathering logs for dmesg ...
	I0717 20:03:22.057010 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 20:03:22.074352 1102136 logs.go:123] Gathering logs for kube-apiserver [eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3] ...
	I0717 20:03:22.074402 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eec27ef53d6bcdccbdf2d6720167efb303c0de316f6f8572821374770035c2a3"
	I0717 20:03:22.128386 1102136 logs.go:123] Gathering logs for etcd [4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc] ...
	I0717 20:03:22.128437 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a90287e5fc1674e8a13cf676c642d55f4cfe5f4820ff695b3183835c86017fc"
	I0717 20:03:22.188390 1102136 logs.go:123] Gathering logs for coredns [63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce] ...
	I0717 20:03:22.188425 1102136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63dc2a3f8ace585850b7f87ad50c96132f0b6c29f0af3de48371b8ea389513ce"
	I0717 20:03:21.172413 1102415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:03:21.194614 1102415 api_server.go:72] duration metric: took 4m13.166163785s to wait for apiserver process to appear ...
	I0717 20:03:21.194645 1102415 api_server.go:88] waiting for apiserver healthz status ...
	I0717 20:03:21.194687 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 20:03:21.194748 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 20:03:21.229142 1102415 cri.go:89] found id: "210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a"
	I0717 20:03:21.229176 1102415 cri.go:89] found id: ""
	I0717 20:03:21.229186 1102415 logs.go:284] 1 containers: [210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a]
	I0717 20:03:21.229255 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:21.234039 1102415 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 20:03:21.234106 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 20:03:21.266482 1102415 cri.go:89] found id: "bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2"
	I0717 20:03:21.266516 1102415 cri.go:89] found id: ""
	I0717 20:03:21.266527 1102415 logs.go:284] 1 containers: [bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2]
	I0717 20:03:21.266596 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:21.271909 1102415 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 20:03:21.271992 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 20:03:21.309830 1102415 cri.go:89] found id: "cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524"
	I0717 20:03:21.309869 1102415 cri.go:89] found id: ""
	I0717 20:03:21.309878 1102415 logs.go:284] 1 containers: [cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524]
	I0717 20:03:21.309943 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:21.314757 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 20:03:21.314838 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 20:03:21.356650 1102415 cri.go:89] found id: "9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261"
	I0717 20:03:21.356681 1102415 cri.go:89] found id: ""
	I0717 20:03:21.356691 1102415 logs.go:284] 1 containers: [9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261]
	I0717 20:03:21.356748 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:21.361582 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 20:03:21.361667 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 20:03:21.394956 1102415 cri.go:89] found id: "76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951"
	I0717 20:03:21.394982 1102415 cri.go:89] found id: ""
	I0717 20:03:21.394994 1102415 logs.go:284] 1 containers: [76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951]
	I0717 20:03:21.395056 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:21.400073 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 20:03:21.400143 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 20:03:21.441971 1102415 cri.go:89] found id: "280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6"
	I0717 20:03:21.442004 1102415 cri.go:89] found id: ""
	I0717 20:03:21.442015 1102415 logs.go:284] 1 containers: [280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6]
	I0717 20:03:21.442083 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:21.447189 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 20:03:21.447253 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 20:03:21.479477 1102415 cri.go:89] found id: ""
	I0717 20:03:21.479512 1102415 logs.go:284] 0 containers: []
	W0717 20:03:21.479524 1102415 logs.go:286] No container was found matching "kindnet"
	I0717 20:03:21.479534 1102415 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 20:03:21.479615 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 20:03:21.515474 1102415 cri.go:89] found id: "19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064"
	I0717 20:03:21.515502 1102415 cri.go:89] found id: "4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88"
	I0717 20:03:21.515510 1102415 cri.go:89] found id: ""
	I0717 20:03:21.515521 1102415 logs.go:284] 2 containers: [19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064 4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88]
	I0717 20:03:21.515583 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:21.520398 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:21.525414 1102415 logs.go:123] Gathering logs for kube-proxy [76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951] ...
	I0717 20:03:21.525450 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951"
	I0717 20:03:21.564455 1102415 logs.go:123] Gathering logs for kubelet ...
	I0717 20:03:21.564492 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 20:03:21.628081 1102415 logs.go:123] Gathering logs for dmesg ...
	I0717 20:03:21.628127 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 20:03:21.646464 1102415 logs.go:123] Gathering logs for describe nodes ...
	I0717 20:03:21.646508 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 20:03:21.803148 1102415 logs.go:123] Gathering logs for kube-apiserver [210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a] ...
	I0717 20:03:21.803205 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a"
	I0717 20:03:21.856704 1102415 logs.go:123] Gathering logs for etcd [bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2] ...
	I0717 20:03:21.856765 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2"
	I0717 20:03:21.907860 1102415 logs.go:123] Gathering logs for coredns [cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524] ...
	I0717 20:03:21.907912 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524"
	I0717 20:03:21.953111 1102415 logs.go:123] Gathering logs for kube-scheduler [9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261] ...
	I0717 20:03:21.953158 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261"
	I0717 20:03:21.999947 1102415 logs.go:123] Gathering logs for kube-controller-manager [280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6] ...
	I0717 20:03:22.000008 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6"
	I0717 20:03:22.061041 1102415 logs.go:123] Gathering logs for storage-provisioner [19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064] ...
	I0717 20:03:22.061078 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064"
	I0717 20:03:22.103398 1102415 logs.go:123] Gathering logs for storage-provisioner [4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88] ...
	I0717 20:03:22.103432 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88"
	I0717 20:03:22.141810 1102415 logs.go:123] Gathering logs for container status ...
	I0717 20:03:22.141864 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 20:03:22.186692 1102415 logs.go:123] Gathering logs for CRI-O ...
	I0717 20:03:22.186726 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 20:03:24.737179 1102136 system_pods.go:59] 8 kube-system pods found
	I0717 20:03:24.737218 1102136 system_pods.go:61] "coredns-5d78c9869d-9mxdj" [fbff09fd-436d-4208-9187-b6312aa1c223] Running
	I0717 20:03:24.737225 1102136 system_pods.go:61] "etcd-no-preload-408472" [7125f19d-c1ed-4b1f-99be-207bbf5d8c70] Running
	I0717 20:03:24.737231 1102136 system_pods.go:61] "kube-apiserver-no-preload-408472" [0e54eaed-daad-434a-ad3b-96f7fb924099] Running
	I0717 20:03:24.737238 1102136 system_pods.go:61] "kube-controller-manager-no-preload-408472" [38ee8079-8142-45a9-8d5a-4abbc5c8bb3b] Running
	I0717 20:03:24.737243 1102136 system_pods.go:61] "kube-proxy-cntdn" [8653567b-abf9-468c-a030-45fc53fa0cc2] Running
	I0717 20:03:24.737248 1102136 system_pods.go:61] "kube-scheduler-no-preload-408472" [e51560a1-c1b0-407c-8635-df512bd033b5] Running
	I0717 20:03:24.737258 1102136 system_pods.go:61] "metrics-server-74d5c6b9c-hnngh" [dfff837e-dbba-4795-935d-9562d2744169] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:03:24.737269 1102136 system_pods.go:61] "storage-provisioner" [1aefd8ef-dec9-4e37-8648-8e5a62622cd3] Running
	I0717 20:03:24.737278 1102136 system_pods.go:74] duration metric: took 4.144012317s to wait for pod list to return data ...
	I0717 20:03:24.737290 1102136 default_sa.go:34] waiting for default service account to be created ...
	I0717 20:03:24.741216 1102136 default_sa.go:45] found service account: "default"
	I0717 20:03:24.741262 1102136 default_sa.go:55] duration metric: took 3.961044ms for default service account to be created ...
	I0717 20:03:24.741275 1102136 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 20:03:24.749060 1102136 system_pods.go:86] 8 kube-system pods found
	I0717 20:03:24.749094 1102136 system_pods.go:89] "coredns-5d78c9869d-9mxdj" [fbff09fd-436d-4208-9187-b6312aa1c223] Running
	I0717 20:03:24.749100 1102136 system_pods.go:89] "etcd-no-preload-408472" [7125f19d-c1ed-4b1f-99be-207bbf5d8c70] Running
	I0717 20:03:24.749104 1102136 system_pods.go:89] "kube-apiserver-no-preload-408472" [0e54eaed-daad-434a-ad3b-96f7fb924099] Running
	I0717 20:03:24.749109 1102136 system_pods.go:89] "kube-controller-manager-no-preload-408472" [38ee8079-8142-45a9-8d5a-4abbc5c8bb3b] Running
	I0717 20:03:24.749113 1102136 system_pods.go:89] "kube-proxy-cntdn" [8653567b-abf9-468c-a030-45fc53fa0cc2] Running
	I0717 20:03:24.749117 1102136 system_pods.go:89] "kube-scheduler-no-preload-408472" [e51560a1-c1b0-407c-8635-df512bd033b5] Running
	I0717 20:03:24.749125 1102136 system_pods.go:89] "metrics-server-74d5c6b9c-hnngh" [dfff837e-dbba-4795-935d-9562d2744169] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:03:24.749139 1102136 system_pods.go:89] "storage-provisioner" [1aefd8ef-dec9-4e37-8648-8e5a62622cd3] Running
	I0717 20:03:24.749147 1102136 system_pods.go:126] duration metric: took 7.865246ms to wait for k8s-apps to be running ...
	I0717 20:03:24.749155 1102136 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 20:03:24.749215 1102136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:03:24.765460 1102136 system_svc.go:56] duration metric: took 16.294048ms WaitForService to wait for kubelet.
	I0717 20:03:24.765503 1102136 kubeadm.go:581] duration metric: took 4m23.529814054s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 20:03:24.765587 1102136 node_conditions.go:102] verifying NodePressure condition ...
	I0717 20:03:24.769332 1102136 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 20:03:24.769368 1102136 node_conditions.go:123] node cpu capacity is 2
	I0717 20:03:24.769381 1102136 node_conditions.go:105] duration metric: took 3.788611ms to run NodePressure ...
	I0717 20:03:24.769392 1102136 start.go:228] waiting for startup goroutines ...
	I0717 20:03:24.769397 1102136 start.go:233] waiting for cluster config update ...
	I0717 20:03:24.769408 1102136 start.go:242] writing updated cluster config ...
	I0717 20:03:24.769830 1102136 ssh_runner.go:195] Run: rm -f paused
	I0717 20:03:24.827845 1102136 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 20:03:24.830624 1102136 out.go:177] * Done! kubectl is now configured to use "no-preload-408472" cluster and "default" namespace by default
	I0717 20:03:20.960575 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:23.458710 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:25.465429 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:21.893446 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:24.393335 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:26.393858 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:25.243410 1102415 api_server.go:253] Checking apiserver healthz at https://192.168.72.51:8444/healthz ...
	I0717 20:03:25.250670 1102415 api_server.go:279] https://192.168.72.51:8444/healthz returned 200:
	ok
	I0717 20:03:25.252086 1102415 api_server.go:141] control plane version: v1.27.3
	I0717 20:03:25.252111 1102415 api_server.go:131] duration metric: took 4.0574608s to wait for apiserver health ...
	I0717 20:03:25.252121 1102415 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 20:03:25.252146 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 20:03:25.252197 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 20:03:25.286754 1102415 cri.go:89] found id: "210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a"
	I0717 20:03:25.286785 1102415 cri.go:89] found id: ""
	I0717 20:03:25.286795 1102415 logs.go:284] 1 containers: [210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a]
	I0717 20:03:25.286867 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:25.292653 1102415 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 20:03:25.292733 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 20:03:25.328064 1102415 cri.go:89] found id: "bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2"
	I0717 20:03:25.328092 1102415 cri.go:89] found id: ""
	I0717 20:03:25.328101 1102415 logs.go:284] 1 containers: [bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2]
	I0717 20:03:25.328170 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:25.333727 1102415 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 20:03:25.333798 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 20:03:25.368132 1102415 cri.go:89] found id: "cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524"
	I0717 20:03:25.368159 1102415 cri.go:89] found id: ""
	I0717 20:03:25.368167 1102415 logs.go:284] 1 containers: [cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524]
	I0717 20:03:25.368245 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:25.373091 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 20:03:25.373197 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 20:03:25.414136 1102415 cri.go:89] found id: "9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261"
	I0717 20:03:25.414165 1102415 cri.go:89] found id: ""
	I0717 20:03:25.414175 1102415 logs.go:284] 1 containers: [9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261]
	I0717 20:03:25.414229 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:25.424603 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 20:03:25.424679 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 20:03:25.470289 1102415 cri.go:89] found id: "76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951"
	I0717 20:03:25.470320 1102415 cri.go:89] found id: ""
	I0717 20:03:25.470331 1102415 logs.go:284] 1 containers: [76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951]
	I0717 20:03:25.470401 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:25.476760 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 20:03:25.476851 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 20:03:25.511350 1102415 cri.go:89] found id: "280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6"
	I0717 20:03:25.511379 1102415 cri.go:89] found id: ""
	I0717 20:03:25.511390 1102415 logs.go:284] 1 containers: [280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6]
	I0717 20:03:25.511459 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:25.516259 1102415 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 20:03:25.516339 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 20:03:25.553868 1102415 cri.go:89] found id: ""
	I0717 20:03:25.553913 1102415 logs.go:284] 0 containers: []
	W0717 20:03:25.553925 1102415 logs.go:286] No container was found matching "kindnet"
	I0717 20:03:25.553932 1102415 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 20:03:25.554025 1102415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 20:03:25.589810 1102415 cri.go:89] found id: "19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064"
	I0717 20:03:25.589844 1102415 cri.go:89] found id: "4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88"
	I0717 20:03:25.589851 1102415 cri.go:89] found id: ""
	I0717 20:03:25.589862 1102415 logs.go:284] 2 containers: [19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064 4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88]
	I0717 20:03:25.589924 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:25.594968 1102415 ssh_runner.go:195] Run: which crictl
	I0717 20:03:25.598953 1102415 logs.go:123] Gathering logs for kube-scheduler [9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261] ...
	I0717 20:03:25.598977 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9790a6abc465805e768a50d133e32aa14e2bb9a37c4cdb5ecdf74cb67cf0c261"
	I0717 20:03:25.640632 1102415 logs.go:123] Gathering logs for kube-controller-manager [280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6] ...
	I0717 20:03:25.640678 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 280d9b31ea5e887253aba8ab2847d66c5500e746a67a8fcbfdcd76cf63bbb5c6"
	I0717 20:03:25.692768 1102415 logs.go:123] Gathering logs for storage-provisioner [19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064] ...
	I0717 20:03:25.692812 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19f50eeeb11e7ce78442eed1eff0d7a3ee08302d49331e13c6d9d32f9a0cb064"
	I0717 20:03:25.728461 1102415 logs.go:123] Gathering logs for container status ...
	I0717 20:03:25.728500 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 20:03:25.779239 1102415 logs.go:123] Gathering logs for dmesg ...
	I0717 20:03:25.779278 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 20:03:25.794738 1102415 logs.go:123] Gathering logs for describe nodes ...
	I0717 20:03:25.794790 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 20:03:25.966972 1102415 logs.go:123] Gathering logs for etcd [bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2] ...
	I0717 20:03:25.967016 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb86b8e5369c26697b7c73235cc7d20ff0426c8a01a48c832128e6fb4b251df2"
	I0717 20:03:26.017430 1102415 logs.go:123] Gathering logs for coredns [cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524] ...
	I0717 20:03:26.017467 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb8cdd2d3f50b927213201a437bf8bfc011c40da0454103349f53ca3eb5de524"
	I0717 20:03:26.053983 1102415 logs.go:123] Gathering logs for kube-proxy [76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951] ...
	I0717 20:03:26.054017 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76ea7912be2a5aaf9106a2a8377c34bd2fb21ec33d4595dc55a70eeb9d9f3951"
	I0717 20:03:26.092510 1102415 logs.go:123] Gathering logs for storage-provisioner [4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88] ...
	I0717 20:03:26.092544 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a4713278724336101c4d87963030b6df239277ec70aee918da7bd9a34989c88"
	I0717 20:03:26.127038 1102415 logs.go:123] Gathering logs for CRI-O ...
	I0717 20:03:26.127071 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 20:03:26.728858 1102415 logs.go:123] Gathering logs for kubelet ...
	I0717 20:03:26.728911 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 20:03:26.792099 1102415 logs.go:123] Gathering logs for kube-apiserver [210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a] ...
	I0717 20:03:26.792146 1102415 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 210ff04a86d98eb6b3e8561f52828368361d00fdc70b85d800b8d8c740da857a"
	I0717 20:03:29.360633 1102415 system_pods.go:59] 8 kube-system pods found
	I0717 20:03:29.360678 1102415 system_pods.go:61] "coredns-5d78c9869d-rjqsv" [f27e2de9-9849-40e2-b6dc-1ee27537b1e6] Running
	I0717 20:03:29.360686 1102415 system_pods.go:61] "etcd-default-k8s-diff-port-711413" [15f74e3f-61a1-4464-bbff-d336f6df4b6e] Running
	I0717 20:03:29.360694 1102415 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-711413" [c164ab32-6a1d-4079-9b58-7da96eabc60e] Running
	I0717 20:03:29.360701 1102415 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-711413" [7dc149c8-be03-4fd6-b945-c63e95a28470] Running
	I0717 20:03:29.360708 1102415 system_pods.go:61] "kube-proxy-9qfpg" [ecb84bb9-57a2-4a42-8104-b792d38479ca] Running
	I0717 20:03:29.360714 1102415 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-711413" [f2cb374f-674e-4a08-82e0-0932b732b485] Running
	I0717 20:03:29.360727 1102415 system_pods.go:61] "metrics-server-74d5c6b9c-hzcd7" [17e01399-9910-4f01-abe7-3eae271af1db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:03:29.360745 1102415 system_pods.go:61] "storage-provisioner" [43705714-97f1-4b06-8eeb-04d60c22112a] Running
	I0717 20:03:29.360755 1102415 system_pods.go:74] duration metric: took 4.108627852s to wait for pod list to return data ...
	I0717 20:03:29.360764 1102415 default_sa.go:34] waiting for default service account to be created ...
	I0717 20:03:29.364887 1102415 default_sa.go:45] found service account: "default"
	I0717 20:03:29.364918 1102415 default_sa.go:55] duration metric: took 4.142278ms for default service account to be created ...
	I0717 20:03:29.364927 1102415 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 20:03:29.372734 1102415 system_pods.go:86] 8 kube-system pods found
	I0717 20:03:29.372774 1102415 system_pods.go:89] "coredns-5d78c9869d-rjqsv" [f27e2de9-9849-40e2-b6dc-1ee27537b1e6] Running
	I0717 20:03:29.372783 1102415 system_pods.go:89] "etcd-default-k8s-diff-port-711413" [15f74e3f-61a1-4464-bbff-d336f6df4b6e] Running
	I0717 20:03:29.372791 1102415 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-711413" [c164ab32-6a1d-4079-9b58-7da96eabc60e] Running
	I0717 20:03:29.372799 1102415 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-711413" [7dc149c8-be03-4fd6-b945-c63e95a28470] Running
	I0717 20:03:29.372806 1102415 system_pods.go:89] "kube-proxy-9qfpg" [ecb84bb9-57a2-4a42-8104-b792d38479ca] Running
	I0717 20:03:29.372813 1102415 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-711413" [f2cb374f-674e-4a08-82e0-0932b732b485] Running
	I0717 20:03:29.372824 1102415 system_pods.go:89] "metrics-server-74d5c6b9c-hzcd7" [17e01399-9910-4f01-abe7-3eae271af1db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:03:29.372832 1102415 system_pods.go:89] "storage-provisioner" [43705714-97f1-4b06-8eeb-04d60c22112a] Running
	I0717 20:03:29.372843 1102415 system_pods.go:126] duration metric: took 7.908204ms to wait for k8s-apps to be running ...
	I0717 20:03:29.372857 1102415 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 20:03:29.372916 1102415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:03:29.393783 1102415 system_svc.go:56] duration metric: took 20.914205ms WaitForService to wait for kubelet.
	I0717 20:03:29.393821 1102415 kubeadm.go:581] duration metric: took 4m21.365424408s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 20:03:29.393853 1102415 node_conditions.go:102] verifying NodePressure condition ...
	I0717 20:03:29.398018 1102415 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 20:03:29.398052 1102415 node_conditions.go:123] node cpu capacity is 2
	I0717 20:03:29.398064 1102415 node_conditions.go:105] duration metric: took 4.205596ms to run NodePressure ...
	I0717 20:03:29.398076 1102415 start.go:228] waiting for startup goroutines ...
	I0717 20:03:29.398082 1102415 start.go:233] waiting for cluster config update ...
	I0717 20:03:29.398102 1102415 start.go:242] writing updated cluster config ...
	I0717 20:03:29.398468 1102415 ssh_runner.go:195] Run: rm -f paused
	I0717 20:03:29.454497 1102415 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 20:03:29.457512 1102415 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-711413" cluster and "default" namespace by default
	I0717 20:03:27.959261 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:30.460004 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:28.394465 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:30.892361 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:32.957801 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:34.958305 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:32.892903 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:35.392748 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:36.958526 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:38.958779 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:37.393705 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:39.892551 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:41.458525 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:42.402712 1103141 pod_ready.go:81] duration metric: took 4m0.00015085s waiting for pod "metrics-server-74d5c6b9c-pshr5" in "kube-system" namespace to be "Ready" ...
	E0717 20:03:42.402748 1103141 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 20:03:42.402774 1103141 pod_ready.go:38] duration metric: took 4m10.235484044s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 20:03:42.402819 1103141 kubeadm.go:640] restartCluster took 4m30.682189828s
	W0717 20:03:42.402887 1103141 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0717 20:03:42.402946 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 20:03:42.393799 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:44.394199 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:46.892897 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:48.895295 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:51.394267 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:53.894027 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:56.393652 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:03:58.896895 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:01.393396 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:03.892923 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:05.894423 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:08.394591 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:10.893136 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:14.851948 1103141 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.44897498s)
	I0717 20:04:14.852044 1103141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:04:14.868887 1103141 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 20:04:14.879707 1103141 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 20:04:14.890657 1103141 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 20:04:14.890724 1103141 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 20:04:14.961576 1103141 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0717 20:04:14.961661 1103141 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 20:04:15.128684 1103141 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 20:04:15.128835 1103141 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 20:04:15.128966 1103141 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 20:04:15.334042 1103141 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 20:04:15.336736 1103141 out.go:204]   - Generating certificates and keys ...
	I0717 20:04:15.336885 1103141 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 20:04:15.336966 1103141 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 20:04:15.337097 1103141 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 20:04:15.337201 1103141 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0717 20:04:15.337312 1103141 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 20:04:15.337393 1103141 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0717 20:04:15.337769 1103141 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0717 20:04:15.338490 1103141 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0717 20:04:15.338931 1103141 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 20:04:15.339490 1103141 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 20:04:15.339994 1103141 kubeadm.go:322] [certs] Using the existing "sa" key
	I0717 20:04:15.340076 1103141 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 20:04:15.714920 1103141 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 20:04:15.892169 1103141 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 20:04:16.203610 1103141 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 20:04:16.346085 1103141 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 20:04:16.364315 1103141 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 20:04:16.365521 1103141 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 20:04:16.366077 1103141 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 20:04:16.503053 1103141 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 20:04:13.393067 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:15.394199 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:16.505772 1103141 out.go:204]   - Booting up control plane ...
	I0717 20:04:16.505925 1103141 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 20:04:16.506056 1103141 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 20:04:16.511321 1103141 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 20:04:16.513220 1103141 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 20:04:16.516069 1103141 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 20:04:17.892626 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:19.893760 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:25.520496 1103141 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.003077 seconds
	I0717 20:04:25.520676 1103141 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 20:04:25.541790 1103141 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 20:04:26.093172 1103141 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 20:04:26.093446 1103141 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-114855 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 20:04:26.614680 1103141 kubeadm.go:322] [bootstrap-token] Using token: nbkipc.s1xu11jkn2pd9jvz
	I0717 20:04:22.393296 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:24.395001 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:26.617034 1103141 out.go:204]   - Configuring RBAC rules ...
	I0717 20:04:26.617210 1103141 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 20:04:26.625795 1103141 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 20:04:26.645311 1103141 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 20:04:26.650977 1103141 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 20:04:26.656523 1103141 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 20:04:26.662996 1103141 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 20:04:26.691726 1103141 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 20:04:26.969700 1103141 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 20:04:27.038459 1103141 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 20:04:27.039601 1103141 kubeadm.go:322] 
	I0717 20:04:27.039723 1103141 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 20:04:27.039753 1103141 kubeadm.go:322] 
	I0717 20:04:27.039848 1103141 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 20:04:27.039857 1103141 kubeadm.go:322] 
	I0717 20:04:27.039879 1103141 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 20:04:27.039945 1103141 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 20:04:27.040023 1103141 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 20:04:27.040036 1103141 kubeadm.go:322] 
	I0717 20:04:27.040114 1103141 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0717 20:04:27.040123 1103141 kubeadm.go:322] 
	I0717 20:04:27.040192 1103141 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 20:04:27.040202 1103141 kubeadm.go:322] 
	I0717 20:04:27.040302 1103141 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 20:04:27.040419 1103141 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 20:04:27.040533 1103141 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 20:04:27.040543 1103141 kubeadm.go:322] 
	I0717 20:04:27.040653 1103141 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 20:04:27.040780 1103141 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 20:04:27.040792 1103141 kubeadm.go:322] 
	I0717 20:04:27.040917 1103141 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token nbkipc.s1xu11jkn2pd9jvz \
	I0717 20:04:27.041051 1103141 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a35378d49f60cb3201ada96dd7ea3b95e7cce0b7f53bd002f18d31a55869c8fc \
	I0717 20:04:27.041083 1103141 kubeadm.go:322] 	--control-plane 
	I0717 20:04:27.041093 1103141 kubeadm.go:322] 
	I0717 20:04:27.041196 1103141 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 20:04:27.041200 1103141 kubeadm.go:322] 
	I0717 20:04:27.041276 1103141 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token nbkipc.s1xu11jkn2pd9jvz \
	I0717 20:04:27.041420 1103141 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a35378d49f60cb3201ada96dd7ea3b95e7cce0b7f53bd002f18d31a55869c8fc 
	I0717 20:04:27.042440 1103141 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 20:04:27.042466 1103141 cni.go:84] Creating CNI manager for ""
	I0717 20:04:27.042512 1103141 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 20:04:27.046805 1103141 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 20:04:27.049084 1103141 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 20:04:27.115952 1103141 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
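The two ssh_runner commands above prepare /etc/cni/net.d and install a 457-byte bridge conflist; the log does not reproduce the file's contents. As a purely illustrative sketch of that step, here is a small Go program that writes a minimal bridge CNI config of the same shape (the JSON body is an assumed example, not the exact file minikube ships):

package main

import (
	"os"
	"path/filepath"
)

// Illustrative bridge CNI config; the real 1-k8s.conflist is not shown in
// the log, so treat these plugin settings as assumptions.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	dir := "/etc/cni/net.d" // same directory the `sudo mkdir -p` above creates
	if err := os.MkdirAll(dir, 0o755); err != nil {
		panic(err)
	}
	// Equivalent of the `scp memory --> /etc/cni/net.d/1-k8s.conflist` step.
	if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}
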
	I0717 20:04:27.155521 1103141 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 20:04:27.155614 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:27.155620 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5 minikube.k8s.io/name=embed-certs-114855 minikube.k8s.io/updated_at=2023_07_17T20_04_27_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:27.604520 1103141 ops.go:34] apiserver oom_adj: -16
	I0717 20:04:27.604687 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:28.204384 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:28.703799 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:29.203981 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:29.703475 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:30.204062 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:30.703323 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:26.892819 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:28.895201 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:31.393384 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:31.204070 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:31.704206 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:32.204069 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:32.704193 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:33.203936 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:33.703692 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:34.203584 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:34.704039 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:35.204118 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:35.703385 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:33.893262 1101908 pod_ready.go:102] pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:33.985163 1101908 pod_ready.go:81] duration metric: took 4m0.000422638s waiting for pod "metrics-server-74d5856cc6-pjjtx" in "kube-system" namespace to be "Ready" ...
	E0717 20:04:33.985205 1101908 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 20:04:33.985241 1101908 pod_ready.go:38] duration metric: took 4m1.200649003s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 20:04:33.985298 1101908 kubeadm.go:640] restartCluster took 4m55.488257482s
	W0717 20:04:33.985385 1101908 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0717 20:04:33.985432 1101908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 20:04:36.203827 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:36.703377 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:37.203981 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:37.703376 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:38.203498 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:38.703751 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:39.204099 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:39.704172 1103141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:04:39.830734 1103141 kubeadm.go:1081] duration metric: took 12.675193605s to wait for elevateKubeSystemPrivileges.
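The long run of identical `kubectl get sa default` commands above is minikube retrying roughly every half second until the cluster's "default" ServiceAccount exists, the wait it logs here as elevateKubeSystemPrivileges. A minimal client-go sketch of that kind of wait, assuming the kubeconfig path from the log and an arbitrary 2-minute budget:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path matches the --kubeconfig flag in the log; the timeout is an
	// assumption for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = wait.PollImmediate(500*time.Millisecond, 2*time.Minute, func() (bool, error) {
		// Same probe as `kubectl get sa default`: done once the controller
		// manager has created the ServiceAccount.
		_, getErr := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		return getErr == nil, nil
	})
	fmt.Println("default ServiceAccount present:", err == nil)
}
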
	I0717 20:04:39.830771 1103141 kubeadm.go:406] StartCluster complete in 5m28.184955104s
	I0717 20:04:39.830796 1103141 settings.go:142] acquiring lock: {Name:mk3323296c186763db6ebb5a1c5ae94a6b1a6242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:04:39.830918 1103141 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 20:04:39.833157 1103141 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/kubeconfig: {Name:mk7f4fe48169c87be980c7edd9dbe55d4ea8b9ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:04:39.834602 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 20:04:39.834801 1103141 config.go:182] Loaded profile config "embed-certs-114855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 20:04:39.834815 1103141 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 20:04:39.835031 1103141 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-114855"
	I0717 20:04:39.835054 1103141 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-114855"
	W0717 20:04:39.835062 1103141 addons.go:240] addon storage-provisioner should already be in state true
	I0717 20:04:39.835120 1103141 host.go:66] Checking if "embed-certs-114855" exists ...
	I0717 20:04:39.835243 1103141 addons.go:69] Setting default-storageclass=true in profile "embed-certs-114855"
	I0717 20:04:39.835240 1103141 addons.go:69] Setting metrics-server=true in profile "embed-certs-114855"
	I0717 20:04:39.835265 1103141 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-114855"
	I0717 20:04:39.835268 1103141 addons.go:231] Setting addon metrics-server=true in "embed-certs-114855"
	W0717 20:04:39.835277 1103141 addons.go:240] addon metrics-server should already be in state true
	I0717 20:04:39.835324 1103141 host.go:66] Checking if "embed-certs-114855" exists ...
	I0717 20:04:39.835732 1103141 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:04:39.835742 1103141 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:04:39.835801 1103141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:04:39.835831 1103141 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:04:39.835799 1103141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:04:39.835916 1103141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:04:39.855470 1103141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35781
	I0717 20:04:39.855482 1103141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35595
	I0717 20:04:39.855481 1103141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35201
	I0717 20:04:39.856035 1103141 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:04:39.856107 1103141 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:04:39.856127 1103141 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:04:39.856776 1103141 main.go:141] libmachine: Using API Version  1
	I0717 20:04:39.856802 1103141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:04:39.856872 1103141 main.go:141] libmachine: Using API Version  1
	I0717 20:04:39.856886 1103141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:04:39.856937 1103141 main.go:141] libmachine: Using API Version  1
	I0717 20:04:39.856967 1103141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:04:39.857216 1103141 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:04:39.857328 1103141 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:04:39.857353 1103141 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:04:39.857979 1103141 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:04:39.858022 1103141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:04:39.858249 1103141 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:04:39.858296 1103141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:04:39.858559 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetState
	I0717 20:04:39.868852 1103141 addons.go:231] Setting addon default-storageclass=true in "embed-certs-114855"
	W0717 20:04:39.868889 1103141 addons.go:240] addon default-storageclass should already be in state true
	I0717 20:04:39.868930 1103141 host.go:66] Checking if "embed-certs-114855" exists ...
	I0717 20:04:39.869376 1103141 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:04:39.869426 1103141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:04:39.877028 1103141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37179
	I0717 20:04:39.877916 1103141 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:04:39.878347 1103141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44951
	I0717 20:04:39.878690 1103141 main.go:141] libmachine: Using API Version  1
	I0717 20:04:39.878713 1103141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:04:39.879085 1103141 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:04:39.879732 1103141 main.go:141] libmachine: Using API Version  1
	I0717 20:04:39.879754 1103141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:04:39.879765 1103141 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:04:39.879950 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetState
	I0717 20:04:39.880175 1103141 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:04:39.880381 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetState
	I0717 20:04:39.882729 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 20:04:39.885818 1103141 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 20:04:39.883284 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 20:04:39.888145 1103141 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 20:04:39.888171 1103141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 20:04:39.888202 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 20:04:39.891651 1103141 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 20:04:39.893769 1103141 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 20:04:39.893066 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 20:04:39.893799 1103141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 20:04:39.893831 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 20:04:39.893840 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 20:04:39.893879 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 20:04:39.894206 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 20:04:39.894454 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 20:04:39.894689 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 20:04:39.894878 1103141 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/embed-certs-114855/id_rsa Username:docker}
	I0717 20:04:39.895562 1103141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39757
	I0717 20:04:39.896172 1103141 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:04:39.896799 1103141 main.go:141] libmachine: Using API Version  1
	I0717 20:04:39.896825 1103141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:04:39.897316 1103141 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:04:39.897969 1103141 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:04:39.898007 1103141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:04:39.898778 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 20:04:39.899616 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 20:04:39.899645 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 20:04:39.899895 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 20:04:39.900193 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 20:04:39.900575 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 20:04:39.900770 1103141 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/embed-certs-114855/id_rsa Username:docker}
	I0717 20:04:39.915966 1103141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43429
	I0717 20:04:39.916539 1103141 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:04:39.917101 1103141 main.go:141] libmachine: Using API Version  1
	I0717 20:04:39.917123 1103141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:04:39.917530 1103141 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:04:39.917816 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetState
	I0717 20:04:39.919631 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .DriverName
	I0717 20:04:39.919916 1103141 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 20:04:39.919936 1103141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 20:04:39.919957 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHHostname
	I0717 20:04:39.926132 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 20:04:39.926487 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:57:9a", ip: ""} in network mk-embed-certs-114855: {Iface:virbr3 ExpiryTime:2023-07-17 20:58:51 +0000 UTC Type:0 Mac:52:54:00:d6:57:9a Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:embed-certs-114855 Clientid:01:52:54:00:d6:57:9a}
	I0717 20:04:39.926520 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | domain embed-certs-114855 has defined IP address 192.168.39.213 and MAC address 52:54:00:d6:57:9a in network mk-embed-certs-114855
	I0717 20:04:39.926779 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHPort
	I0717 20:04:39.927115 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHKeyPath
	I0717 20:04:39.927327 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .GetSSHUsername
	I0717 20:04:39.927522 1103141 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/embed-certs-114855/id_rsa Username:docker}
	I0717 20:04:40.077079 1103141 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 20:04:40.077106 1103141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 20:04:40.084344 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 20:04:40.114809 1103141 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 20:04:40.123795 1103141 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 20:04:40.149950 1103141 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 20:04:40.149977 1103141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 20:04:40.222818 1103141 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 20:04:40.222855 1103141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 20:04:40.290773 1103141 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 20:04:40.464132 1103141 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-114855" context rescaled to 1 replicas
	I0717 20:04:40.464182 1103141 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 20:04:40.468285 1103141 out.go:177] * Verifying Kubernetes components...
	I0717 20:04:40.470824 1103141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:04:42.565704 1103141 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.481305344s)
	I0717 20:04:42.565749 1103141 start.go:917] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
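The 2.48s command completed above is the sed pipeline shown earlier: it splices a hosts stanza mapping 192.168.39.1 to host.minikube.internal into the CoreDNS Corefile and replaces the kube-system/coredns ConfigMap. An equivalent, purely illustrative way to make the same edit through the API with client-go (not how minikube itself does it):

package main

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Stanza matching the one the sed expression inserts ahead of the
// "forward . /etc/resolv.conf" line of the Corefile.
const hostsBlock = `        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }`

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	corefile := cm.Data["Corefile"]
	if !strings.Contains(corefile, "host.minikube.internal") {
		// Insert the hosts block just before the forward plugin, line by line.
		var out []string
		for _, line := range strings.Split(corefile, "\n") {
			if strings.Contains(line, "forward . /etc/resolv.conf") {
				out = append(out, hostsBlock)
			}
			out = append(out, line)
		}
		cm.Data["Corefile"] = strings.Join(out, "\n")
		if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
}
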
	I0717 20:04:43.290667 1103141 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.175803142s)
	I0717 20:04:43.290744 1103141 main.go:141] libmachine: Making call to close driver server
	I0717 20:04:43.290759 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .Close
	I0717 20:04:43.290778 1103141 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.166947219s)
	I0717 20:04:43.290822 1103141 main.go:141] libmachine: Making call to close driver server
	I0717 20:04:43.290840 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .Close
	I0717 20:04:43.291087 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | Closing plugin on server side
	I0717 20:04:43.291217 1103141 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:04:43.291225 1103141 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:04:43.291238 1103141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:04:43.291241 1103141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:04:43.291254 1103141 main.go:141] libmachine: Making call to close driver server
	I0717 20:04:43.291261 1103141 main.go:141] libmachine: Making call to close driver server
	I0717 20:04:43.291268 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .Close
	I0717 20:04:43.291272 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .Close
	I0717 20:04:43.291613 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | Closing plugin on server side
	I0717 20:04:43.291662 1103141 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:04:43.291671 1103141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:04:43.291732 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | Closing plugin on server side
	I0717 20:04:43.291756 1103141 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:04:43.291764 1103141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:04:43.291775 1103141 main.go:141] libmachine: Making call to close driver server
	I0717 20:04:43.291784 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .Close
	I0717 20:04:43.292436 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | Closing plugin on server side
	I0717 20:04:43.292456 1103141 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:04:43.292471 1103141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:04:43.439222 1103141 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.148389848s)
	I0717 20:04:43.439268 1103141 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.968393184s)
	I0717 20:04:43.439310 1103141 node_ready.go:35] waiting up to 6m0s for node "embed-certs-114855" to be "Ready" ...
	I0717 20:04:43.439357 1103141 main.go:141] libmachine: Making call to close driver server
	I0717 20:04:43.439401 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .Close
	I0717 20:04:43.439784 1103141 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:04:43.439806 1103141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:04:43.439863 1103141 main.go:141] libmachine: Making call to close driver server
	I0717 20:04:43.439932 1103141 main.go:141] libmachine: (embed-certs-114855) Calling .Close
	I0717 20:04:43.440202 1103141 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:04:43.440220 1103141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:04:43.440226 1103141 main.go:141] libmachine: (embed-certs-114855) DBG | Closing plugin on server side
	I0717 20:04:43.440232 1103141 addons.go:467] Verifying addon metrics-server=true in "embed-certs-114855"
	I0717 20:04:43.443066 1103141 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 20:04:43.445240 1103141 addons.go:502] enable addons completed in 3.610419127s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 20:04:43.494952 1103141 node_ready.go:49] node "embed-certs-114855" has status "Ready":"True"
	I0717 20:04:43.495002 1103141 node_ready.go:38] duration metric: took 55.676022ms waiting for node "embed-certs-114855" to be "Ready" ...
	I0717 20:04:43.495017 1103141 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
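The pod_ready waits that follow, and the "Ready":"True"/"False" lines throughout this log, report each pod's Ready condition from its status. A short client-go sketch of the same check over a few of the label selectors listed above, with the kubeconfig path assumed:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the Pod carries condition Ready=True, the test
// behind the Ready True/False log lines.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// A subset of the system-critical selectors named in the log line above.
	for _, selector := range []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver"} {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s ready=%v\n", p.Name, podReady(&p))
		}
	}
}
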
	I0717 20:04:43.579632 1103141 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-9dkzb" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:44.676633 1103141 pod_ready.go:92] pod "coredns-5d78c9869d-9dkzb" in "kube-system" namespace has status "Ready":"True"
	I0717 20:04:44.676664 1103141 pod_ready.go:81] duration metric: took 1.096981736s waiting for pod "coredns-5d78c9869d-9dkzb" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:44.676677 1103141 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-gq2b2" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:44.683019 1103141 pod_ready.go:92] pod "coredns-5d78c9869d-gq2b2" in "kube-system" namespace has status "Ready":"True"
	I0717 20:04:44.683061 1103141 pod_ready.go:81] duration metric: took 6.376086ms waiting for pod "coredns-5d78c9869d-gq2b2" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:44.683077 1103141 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:44.691140 1103141 pod_ready.go:92] pod "etcd-embed-certs-114855" in "kube-system" namespace has status "Ready":"True"
	I0717 20:04:44.691166 1103141 pod_ready.go:81] duration metric: took 8.082867ms waiting for pod "etcd-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:44.691180 1103141 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:44.713413 1103141 pod_ready.go:92] pod "kube-apiserver-embed-certs-114855" in "kube-system" namespace has status "Ready":"True"
	I0717 20:04:44.713448 1103141 pod_ready.go:81] duration metric: took 22.261351ms waiting for pod "kube-apiserver-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:44.713462 1103141 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:44.728761 1103141 pod_ready.go:92] pod "kube-controller-manager-embed-certs-114855" in "kube-system" namespace has status "Ready":"True"
	I0717 20:04:44.728797 1103141 pod_ready.go:81] duration metric: took 15.326363ms waiting for pod "kube-controller-manager-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:44.728813 1103141 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bfvnl" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:45.583863 1103141 pod_ready.go:92] pod "kube-proxy-bfvnl" in "kube-system" namespace has status "Ready":"True"
	I0717 20:04:45.583901 1103141 pod_ready.go:81] duration metric: took 855.078548ms waiting for pod "kube-proxy-bfvnl" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:45.583915 1103141 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:45.867684 1103141 pod_ready.go:92] pod "kube-scheduler-embed-certs-114855" in "kube-system" namespace has status "Ready":"True"
	I0717 20:04:45.867719 1103141 pod_ready.go:81] duration metric: took 283.796193ms waiting for pod "kube-scheduler-embed-certs-114855" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:45.867735 1103141 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace to be "Ready" ...
	I0717 20:04:48.274479 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:50.278380 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:52.775046 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:54.775545 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:56.776685 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:59.275966 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:04:57.110722 1101908 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (23.125251743s)
	I0717 20:04:57.110813 1101908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:04:57.124991 1101908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 20:04:57.136828 1101908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 20:04:57.146898 1101908 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 20:04:57.146965 1101908 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0717 20:04:57.390116 1101908 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 20:05:01.281623 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:03.776009 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:10.335351 1101908 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0717 20:05:10.335447 1101908 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 20:05:10.335566 1101908 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 20:05:10.335703 1101908 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 20:05:10.335829 1101908 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 20:05:10.335949 1101908 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 20:05:10.336064 1101908 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 20:05:10.336135 1101908 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0717 20:05:10.336220 1101908 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 20:05:10.338257 1101908 out.go:204]   - Generating certificates and keys ...
	I0717 20:05:10.338354 1101908 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 20:05:10.338443 1101908 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 20:05:10.338558 1101908 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 20:05:10.338681 1101908 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0717 20:05:10.338792 1101908 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 20:05:10.338855 1101908 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0717 20:05:10.338950 1101908 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0717 20:05:10.339044 1101908 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0717 20:05:10.339160 1101908 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 20:05:10.339264 1101908 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 20:05:10.339326 1101908 kubeadm.go:322] [certs] Using the existing "sa" key
	I0717 20:05:10.339403 1101908 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 20:05:10.339477 1101908 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 20:05:10.339556 1101908 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 20:05:10.339650 1101908 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 20:05:10.339727 1101908 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 20:05:10.339820 1101908 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 20:05:10.341550 1101908 out.go:204]   - Booting up control plane ...
	I0717 20:05:10.341674 1101908 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 20:05:10.341797 1101908 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 20:05:10.341892 1101908 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 20:05:10.341982 1101908 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 20:05:10.342180 1101908 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 20:05:10.342290 1101908 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.005656 seconds
	I0717 20:05:10.342399 1101908 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 20:05:10.342515 1101908 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 20:05:10.342582 1101908 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 20:05:10.342742 1101908 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-149000 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0717 20:05:10.342830 1101908 kubeadm.go:322] [bootstrap-token] Using token: ki6f1y.fknzxf03oj84iyat
	I0717 20:05:10.344845 1101908 out.go:204]   - Configuring RBAC rules ...
	I0717 20:05:10.344980 1101908 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 20:05:10.345153 1101908 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 20:05:10.345318 1101908 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 20:05:10.345473 1101908 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 20:05:10.345600 1101908 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 20:05:10.345664 1101908 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 20:05:10.345739 1101908 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 20:05:10.345750 1101908 kubeadm.go:322] 
	I0717 20:05:10.345834 1101908 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 20:05:10.345843 1101908 kubeadm.go:322] 
	I0717 20:05:10.345939 1101908 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 20:05:10.345947 1101908 kubeadm.go:322] 
	I0717 20:05:10.345983 1101908 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 20:05:10.346067 1101908 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 20:05:10.346139 1101908 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 20:05:10.346148 1101908 kubeadm.go:322] 
	I0717 20:05:10.346248 1101908 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 20:05:10.346356 1101908 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 20:05:10.346470 1101908 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 20:05:10.346480 1101908 kubeadm.go:322] 
	I0717 20:05:10.346588 1101908 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0717 20:05:10.346686 1101908 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 20:05:10.346695 1101908 kubeadm.go:322] 
	I0717 20:05:10.346821 1101908 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ki6f1y.fknzxf03oj84iyat \
	I0717 20:05:10.346997 1101908 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:a35378d49f60cb3201ada96dd7ea3b95e7cce0b7f53bd002f18d31a55869c8fc \
	I0717 20:05:10.347033 1101908 kubeadm.go:322]     --control-plane 	  
	I0717 20:05:10.347042 1101908 kubeadm.go:322] 
	I0717 20:05:10.347152 1101908 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 20:05:10.347161 1101908 kubeadm.go:322] 
	I0717 20:05:10.347260 1101908 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ki6f1y.fknzxf03oj84iyat \
	I0717 20:05:10.347429 1101908 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:a35378d49f60cb3201ada96dd7ea3b95e7cce0b7f53bd002f18d31a55869c8fc 
	I0717 20:05:10.347449 1101908 cni.go:84] Creating CNI manager for ""
	I0717 20:05:10.347463 1101908 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 20:05:10.349875 1101908 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 20:05:06.284772 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:08.777303 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:10.351592 1101908 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 20:05:10.370891 1101908 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 20:05:10.395381 1101908 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 20:05:10.395477 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5 minikube.k8s.io/name=old-k8s-version-149000 minikube.k8s.io/updated_at=2023_07_17T20_05_10_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:10.395473 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:10.663627 1101908 ops.go:34] apiserver oom_adj: -16
	I0717 20:05:10.663730 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:11.311991 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:11.812120 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:11.275701 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:13.277070 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:12.312047 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:12.811579 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:13.311876 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:13.811911 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:14.311514 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:14.811938 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:15.312088 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:15.812089 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:16.312164 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:16.812065 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:15.776961 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:17.778204 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:20.275642 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:17.312322 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:17.811428 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:18.312070 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:18.812245 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:19.311363 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:19.811909 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:20.311343 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:20.811869 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:21.311974 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:21.811429 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:22.311474 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:22.811809 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:23.311574 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:23.812246 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:24.312115 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:24.812132 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:25.311694 1101908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:05:25.457162 1101908 kubeadm.go:1081] duration metric: took 15.061765556s to wait for elevateKubeSystemPrivileges.
	I0717 20:05:25.457213 1101908 kubeadm.go:406] StartCluster complete in 5m47.004786394s
	I0717 20:05:25.457273 1101908 settings.go:142] acquiring lock: {Name:mk3323296c186763db6ebb5a1c5ae94a6b1a6242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:05:25.457431 1101908 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 20:05:25.459593 1101908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/kubeconfig: {Name:mk7f4fe48169c87be980c7edd9dbe55d4ea8b9ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:05:25.459942 1101908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 20:05:25.460139 1101908 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 20:05:25.460267 1101908 config.go:182] Loaded profile config "old-k8s-version-149000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0717 20:05:25.460272 1101908 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-149000"
	I0717 20:05:25.460409 1101908 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-149000"
	W0717 20:05:25.460419 1101908 addons.go:240] addon storage-provisioner should already be in state true
	I0717 20:05:25.460516 1101908 host.go:66] Checking if "old-k8s-version-149000" exists ...
	I0717 20:05:25.460284 1101908 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-149000"
	I0717 20:05:25.460709 1101908 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-149000"
	W0717 20:05:25.460727 1101908 addons.go:240] addon metrics-server should already be in state true
	I0717 20:05:25.460294 1101908 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-149000"
	I0717 20:05:25.460771 1101908 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-149000"
	I0717 20:05:25.460793 1101908 host.go:66] Checking if "old-k8s-version-149000" exists ...
	I0717 20:05:25.461033 1101908 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:05:25.461061 1101908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:05:25.461100 1101908 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:05:25.461128 1101908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:05:25.461201 1101908 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:05:25.461227 1101908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:05:25.487047 1101908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45213
	I0717 20:05:25.487091 1101908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44607
	I0717 20:05:25.487066 1101908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35377
	I0717 20:05:25.487833 1101908 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:05:25.487898 1101908 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:05:25.487930 1101908 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:05:25.488571 1101908 main.go:141] libmachine: Using API Version  1
	I0717 20:05:25.488595 1101908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:05:25.488597 1101908 main.go:141] libmachine: Using API Version  1
	I0717 20:05:25.488615 1101908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:05:25.488632 1101908 main.go:141] libmachine: Using API Version  1
	I0717 20:05:25.488660 1101908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:05:25.489058 1101908 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:05:25.489074 1101908 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:05:25.489135 1101908 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:05:25.489284 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetState
	I0717 20:05:25.489635 1101908 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:05:25.489641 1101908 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:05:25.489654 1101908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:05:25.489657 1101908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:05:25.498029 1101908 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-149000"
	W0717 20:05:25.498058 1101908 addons.go:240] addon default-storageclass should already be in state true
	I0717 20:05:25.498092 1101908 host.go:66] Checking if "old-k8s-version-149000" exists ...
	I0717 20:05:25.498485 1101908 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:05:25.498527 1101908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:05:25.506931 1101908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33279
	I0717 20:05:25.507478 1101908 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:05:25.508080 1101908 main.go:141] libmachine: Using API Version  1
	I0717 20:05:25.508109 1101908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:05:25.508562 1101908 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:05:25.508845 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetState
	I0717 20:05:25.510969 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	I0717 20:05:25.513078 1101908 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 20:05:25.511340 1101908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42217
	I0717 20:05:25.515599 1101908 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 20:05:25.515626 1101908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 20:05:25.515655 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 20:05:25.516012 1101908 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:05:25.516682 1101908 main.go:141] libmachine: Using API Version  1
	I0717 20:05:25.516709 1101908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:05:25.517198 1101908 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:05:25.517438 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetState
	I0717 20:05:25.519920 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 20:05:25.520835 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	I0717 20:05:25.521176 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 20:05:25.521204 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 20:05:25.523226 1101908 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 20:05:22.775399 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:25.278740 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:25.521305 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 20:05:25.523448 1101908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38723
	I0717 20:05:25.525260 1101908 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 20:05:25.525280 1101908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 20:05:25.525310 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 20:05:25.525529 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 20:05:25.526263 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 20:05:25.526597 1101908 sshutil.go:53] new ssh client: &{IP:192.168.50.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/old-k8s-version-149000/id_rsa Username:docker}
	I0717 20:05:25.527369 1101908 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:05:25.528329 1101908 main.go:141] libmachine: Using API Version  1
	I0717 20:05:25.528357 1101908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:05:25.528696 1101908 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:05:25.528792 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 20:05:25.529350 1101908 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:05:25.529381 1101908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:05:25.529649 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 20:05:25.529655 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 20:05:25.529674 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 20:05:25.529823 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 20:05:25.529949 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 20:05:25.530088 1101908 sshutil.go:53] new ssh client: &{IP:192.168.50.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/old-k8s-version-149000/id_rsa Username:docker}
	I0717 20:05:25.552954 1101908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33783
	I0717 20:05:25.553470 1101908 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:05:25.554117 1101908 main.go:141] libmachine: Using API Version  1
	I0717 20:05:25.554145 1101908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:05:25.554521 1101908 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:05:25.554831 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetState
	I0717 20:05:25.556872 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .DriverName
	I0717 20:05:25.557158 1101908 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 20:05:25.557183 1101908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 20:05:25.557204 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHHostname
	I0717 20:05:25.560114 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 20:05:25.560622 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:d8:03", ip: ""} in network mk-old-k8s-version-149000: {Iface:virbr1 ExpiryTime:2023-07-17 20:59:17 +0000 UTC Type:0 Mac:52:54:00:88:d8:03 Iaid: IPaddr:192.168.50.177 Prefix:24 Hostname:old-k8s-version-149000 Clientid:01:52:54:00:88:d8:03}
	I0717 20:05:25.560656 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | domain old-k8s-version-149000 has defined IP address 192.168.50.177 and MAC address 52:54:00:88:d8:03 in network mk-old-k8s-version-149000
	I0717 20:05:25.561095 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHPort
	I0717 20:05:25.561350 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHKeyPath
	I0717 20:05:25.561512 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .GetSSHUsername
	I0717 20:05:25.561749 1101908 sshutil.go:53] new ssh client: &{IP:192.168.50.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/old-k8s-version-149000/id_rsa Username:docker}
	I0717 20:05:25.724163 1101908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 20:05:25.749198 1101908 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 20:05:25.749231 1101908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 20:05:25.754533 1101908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 20:05:25.757518 1101908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 20:05:25.811831 1101908 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 20:05:25.811867 1101908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 20:05:25.893143 1101908 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 20:05:25.893175 1101908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 20:05:25.994781 1101908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 20:05:26.019864 1101908 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-149000" context rescaled to 1 replicas
	I0717 20:05:26.019914 1101908 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.177 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 20:05:26.022777 1101908 out.go:177] * Verifying Kubernetes components...
	I0717 20:05:26.025694 1101908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:05:27.100226 1101908 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.376005593s)
	I0717 20:05:27.100282 1101908 main.go:141] libmachine: Making call to close driver server
	I0717 20:05:27.100295 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .Close
	I0717 20:05:27.100306 1101908 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.345727442s)
	I0717 20:05:27.100343 1101908 start.go:917] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0717 20:05:27.100360 1101908 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.342808508s)
	I0717 20:05:27.100411 1101908 main.go:141] libmachine: Making call to close driver server
	I0717 20:05:27.100426 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .Close
	I0717 20:05:27.100781 1101908 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:05:27.100799 1101908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:05:27.100810 1101908 main.go:141] libmachine: Making call to close driver server
	I0717 20:05:27.100821 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .Close
	I0717 20:05:27.100866 1101908 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:05:27.100877 1101908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:05:27.100876 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | Closing plugin on server side
	I0717 20:05:27.100885 1101908 main.go:141] libmachine: Making call to close driver server
	I0717 20:05:27.100894 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .Close
	I0717 20:05:27.101035 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | Closing plugin on server side
	I0717 20:05:27.101065 1101908 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:05:27.101100 1101908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:05:27.101154 1101908 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:05:27.101170 1101908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:05:27.101185 1101908 main.go:141] libmachine: Making call to close driver server
	I0717 20:05:27.101195 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .Close
	I0717 20:05:27.101423 1101908 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:05:27.101441 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | Closing plugin on server side
	I0717 20:05:27.101448 1101908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:05:27.169038 1101908 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.143298277s)
	I0717 20:05:27.169095 1101908 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-149000" to be "Ready" ...
	I0717 20:05:27.169044 1101908 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.174211865s)
	I0717 20:05:27.169278 1101908 main.go:141] libmachine: Making call to close driver server
	I0717 20:05:27.169333 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .Close
	I0717 20:05:27.169672 1101908 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:05:27.169782 1101908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:05:27.169814 1101908 main.go:141] libmachine: Making call to close driver server
	I0717 20:05:27.169837 1101908 main.go:141] libmachine: (old-k8s-version-149000) Calling .Close
	I0717 20:05:27.169758 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | Closing plugin on server side
	I0717 20:05:27.171950 1101908 main.go:141] libmachine: (old-k8s-version-149000) DBG | Closing plugin on server side
	I0717 20:05:27.171960 1101908 main.go:141] libmachine: Successfully made call to close driver server
	I0717 20:05:27.171979 1101908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 20:05:27.171992 1101908 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-149000"
	I0717 20:05:27.174411 1101908 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 20:05:27.777543 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:30.276174 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:27.176695 1101908 addons.go:502] enable addons completed in 1.716545434s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 20:05:27.191392 1101908 node_ready.go:49] node "old-k8s-version-149000" has status "Ready":"True"
	I0717 20:05:27.191435 1101908 node_ready.go:38] duration metric: took 22.324367ms waiting for node "old-k8s-version-149000" to be "Ready" ...
	I0717 20:05:27.191450 1101908 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 20:05:27.203011 1101908 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-ldwkf" in "kube-system" namespace to be "Ready" ...
	I0717 20:05:29.214694 1101908 pod_ready.go:102] pod "coredns-5644d7b6d9-ldwkf" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:31.215215 1101908 pod_ready.go:92] pod "coredns-5644d7b6d9-ldwkf" in "kube-system" namespace has status "Ready":"True"
	I0717 20:05:31.215244 1101908 pod_ready.go:81] duration metric: took 4.012199031s waiting for pod "coredns-5644d7b6d9-ldwkf" in "kube-system" namespace to be "Ready" ...
	I0717 20:05:31.215265 1101908 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-t4mmh" in "kube-system" namespace to be "Ready" ...
	I0717 20:05:31.222461 1101908 pod_ready.go:92] pod "kube-proxy-t4mmh" in "kube-system" namespace has status "Ready":"True"
	I0717 20:05:31.222489 1101908 pod_ready.go:81] duration metric: took 7.215944ms waiting for pod "kube-proxy-t4mmh" in "kube-system" namespace to be "Ready" ...
	I0717 20:05:31.222504 1101908 pod_ready.go:38] duration metric: took 4.031041761s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 20:05:31.222530 1101908 api_server.go:52] waiting for apiserver process to appear ...
	I0717 20:05:31.222606 1101908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:05:31.239450 1101908 api_server.go:72] duration metric: took 5.21948786s to wait for apiserver process to appear ...
	I0717 20:05:31.239494 1101908 api_server.go:88] waiting for apiserver healthz status ...
	I0717 20:05:31.239520 1101908 api_server.go:253] Checking apiserver healthz at https://192.168.50.177:8443/healthz ...
	I0717 20:05:31.247985 1101908 api_server.go:279] https://192.168.50.177:8443/healthz returned 200:
	ok
	I0717 20:05:31.249351 1101908 api_server.go:141] control plane version: v1.16.0
	I0717 20:05:31.249383 1101908 api_server.go:131] duration metric: took 9.880729ms to wait for apiserver health ...
	I0717 20:05:31.249391 1101908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 20:05:31.255025 1101908 system_pods.go:59] 4 kube-system pods found
	I0717 20:05:31.255062 1101908 system_pods.go:61] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:31.255069 1101908 system_pods.go:61] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:31.255076 1101908 system_pods.go:61] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:31.255086 1101908 system_pods.go:61] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:31.255095 1101908 system_pods.go:74] duration metric: took 5.697473ms to wait for pod list to return data ...
	I0717 20:05:31.255106 1101908 default_sa.go:34] waiting for default service account to be created ...
	I0717 20:05:31.259740 1101908 default_sa.go:45] found service account: "default"
	I0717 20:05:31.259772 1101908 default_sa.go:55] duration metric: took 4.660789ms for default service account to be created ...
	I0717 20:05:31.259780 1101908 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 20:05:31.264000 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:31.264044 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:31.264051 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:31.264081 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:31.264093 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:31.264116 1101908 retry.go:31] will retry after 269.941707ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:31.540816 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:31.540865 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:31.540876 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:31.540891 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:31.540922 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:31.540951 1101908 retry.go:31] will retry after 335.890023ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:32.287639 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:34.776299 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:31.881678 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:31.881721 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:31.881731 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:31.881742 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:31.881754 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:31.881778 1101908 retry.go:31] will retry after 452.6849ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:32.340889 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:32.340919 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:32.340924 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:32.340931 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:32.340938 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:32.340954 1101908 retry.go:31] will retry after 433.94285ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:32.780743 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:32.780777 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:32.780784 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:32.780795 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:32.780808 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:32.780830 1101908 retry.go:31] will retry after 664.997213ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:33.450870 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:33.450901 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:33.450906 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:33.450912 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:33.450919 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:33.450936 1101908 retry.go:31] will retry after 669.043592ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:34.126116 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:34.126155 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:34.126164 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:34.126177 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:34.126187 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:34.126207 1101908 retry.go:31] will retry after 799.422303ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:34.930555 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:34.930595 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:34.930604 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:34.930614 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:34.930624 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:34.930648 1101908 retry.go:31] will retry after 1.329879988s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:36.266531 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:36.266570 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:36.266578 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:36.266586 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:36.266596 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:36.266616 1101908 retry.go:31] will retry after 1.667039225s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:37.275872 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:39.776283 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:37.940699 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:37.940736 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:37.940746 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:37.940756 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:37.940768 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:37.940793 1101908 retry.go:31] will retry after 1.426011935s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:39.371704 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:39.371738 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:39.371743 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:39.371750 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:39.371757 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:39.371775 1101908 retry.go:31] will retry after 2.864830097s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:42.276143 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:44.775621 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:42.241652 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:42.241693 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:42.241701 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:42.241713 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:42.241723 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:42.241744 1101908 retry.go:31] will retry after 2.785860959s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:45.034761 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:45.034793 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:45.034798 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:45.034806 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:45.034818 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:45.034839 1101908 retry.go:31] will retry after 3.037872313s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:46.776795 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:49.276343 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:48.078790 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:48.078826 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:48.078831 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:48.078842 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:48.078849 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:48.078867 1101908 retry.go:31] will retry after 4.546196458s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:51.777942 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:54.274279 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:52.631941 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:52.631986 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:52.631995 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:52.632006 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:52.632017 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:52.632043 1101908 retry.go:31] will retry after 6.391777088s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:05:56.276359 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:58.277520 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:05:59.036918 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:05:59.036951 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:05:59.036956 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:05:59.036963 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:05:59.036970 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:05:59.036988 1101908 retry.go:31] will retry after 5.758521304s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:06:00.776149 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:03.276291 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:05.276530 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:04.801914 1101908 system_pods.go:86] 4 kube-system pods found
	I0717 20:06:04.801944 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:06:04.801950 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:06:04.801958 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:06:04.801965 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:06:04.801982 1101908 retry.go:31] will retry after 7.046104479s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 20:06:07.777447 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:10.275741 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:12.776577 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:14.776717 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:11.856116 1101908 system_pods.go:86] 8 kube-system pods found
	I0717 20:06:11.856165 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:06:11.856175 1101908 system_pods.go:89] "etcd-old-k8s-version-149000" [702c8e9f-d99a-4766-af97-550dc956f093] Pending
	I0717 20:06:11.856183 1101908 system_pods.go:89] "kube-apiserver-old-k8s-version-149000" [0f0c9817-f4c9-4266-b576-c270cea11b4b] Pending
	I0717 20:06:11.856191 1101908 system_pods.go:89] "kube-controller-manager-old-k8s-version-149000" [539db0c4-6e8c-42eb-9b73-686de5f6c7bf] Running
	I0717 20:06:11.856207 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:06:11.856216 1101908 system_pods.go:89] "kube-scheduler-old-k8s-version-149000" [5a27a0f7-c6c9-4324-a51c-d33c205d8724] Running
	I0717 20:06:11.856295 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:06:11.856308 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:06:11.856336 1101908 retry.go:31] will retry after 13.224383762s: missing components: etcd, kube-apiserver
	I0717 20:06:16.779816 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:19.275840 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:25.091227 1101908 system_pods.go:86] 8 kube-system pods found
	I0717 20:06:25.091272 1101908 system_pods.go:89] "coredns-5644d7b6d9-ldwkf" [1f5b5b78-acc2-460b-971e-349b7f30a211] Running
	I0717 20:06:25.091281 1101908 system_pods.go:89] "etcd-old-k8s-version-149000" [702c8e9f-d99a-4766-af97-550dc956f093] Running
	I0717 20:06:25.091288 1101908 system_pods.go:89] "kube-apiserver-old-k8s-version-149000" [0f0c9817-f4c9-4266-b576-c270cea11b4b] Running
	I0717 20:06:25.091298 1101908 system_pods.go:89] "kube-controller-manager-old-k8s-version-149000" [539db0c4-6e8c-42eb-9b73-686de5f6c7bf] Running
	I0717 20:06:25.091305 1101908 system_pods.go:89] "kube-proxy-t4mmh" [570c5c22-efff-40bb-8ade-e1febdbff4f1] Running
	I0717 20:06:25.091312 1101908 system_pods.go:89] "kube-scheduler-old-k8s-version-149000" [5a27a0f7-c6c9-4324-a51c-d33c205d8724] Running
	I0717 20:06:25.091324 1101908 system_pods.go:89] "metrics-server-74d5856cc6-cxzws" [493d4f17-8ddf-4d76-aa86-33fc669de018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:06:25.091337 1101908 system_pods.go:89] "storage-provisioner" [cf78f6d0-4bf8-449c-8231-0df3920b8b1f] Running
	I0717 20:06:25.091348 1101908 system_pods.go:126] duration metric: took 53.831561334s to wait for k8s-apps to be running ...
	I0717 20:06:25.091360 1101908 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 20:06:25.091455 1101908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:06:25.119739 1101908 system_svc.go:56] duration metric: took 28.348212ms WaitForService to wait for kubelet.
	I0717 20:06:25.119804 1101908 kubeadm.go:581] duration metric: took 59.099852409s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 20:06:25.119854 1101908 node_conditions.go:102] verifying NodePressure condition ...
	I0717 20:06:25.123561 1101908 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 20:06:25.123592 1101908 node_conditions.go:123] node cpu capacity is 2
	I0717 20:06:25.123606 1101908 node_conditions.go:105] duration metric: took 3.739793ms to run NodePressure ...
	I0717 20:06:25.123618 1101908 start.go:228] waiting for startup goroutines ...
	I0717 20:06:25.123624 1101908 start.go:233] waiting for cluster config update ...
	I0717 20:06:25.123669 1101908 start.go:242] writing updated cluster config ...
	I0717 20:06:25.124104 1101908 ssh_runner.go:195] Run: rm -f paused
	I0717 20:06:25.182838 1101908 start.go:578] kubectl: 1.27.3, cluster: 1.16.0 (minor skew: 11)
	I0717 20:06:25.185766 1101908 out.go:177] 
	W0717 20:06:25.188227 1101908 out.go:239] ! /usr/local/bin/kubectl is version 1.27.3, which may have incompatibilities with Kubernetes 1.16.0.
	I0717 20:06:25.190452 1101908 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0717 20:06:25.192660 1101908 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-149000" cluster and "default" namespace by default
	I0717 20:06:21.776152 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:23.776276 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:25.781589 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:28.278450 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:30.775293 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:33.276069 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:35.775650 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:37.777006 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:40.275701 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:42.774969 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:44.775928 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:46.776363 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:48.786345 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:51.276618 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:53.776161 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:56.276037 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:06:58.276310 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:00.276357 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:02.775722 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:04.775945 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:07.280130 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:09.776589 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:12.277066 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:14.775525 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:17.275601 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:19.777143 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:22.286857 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:24.775908 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:26.779341 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:29.275732 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:31.276783 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:33.776286 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:36.274383 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:38.275384 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:40.775469 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:42.776331 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:44.776843 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:47.276067 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:49.276907 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:51.277652 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:53.776315 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:55.780034 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:07:58.276277 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:00.776903 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:03.276429 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:05.277182 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:07.776330 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:09.777528 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:12.275388 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:14.275926 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:16.776757 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:19.276466 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:21.276544 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:23.775888 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:25.778534 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:28.277897 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:30.775389 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:32.777134 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:34.777503 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:37.276492 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:39.775380 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:41.777135 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:44.276305 1103141 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace has status "Ready":"False"
	I0717 20:08:45.868652 1103141 pod_ready.go:81] duration metric: took 4m0.000895459s waiting for pod "metrics-server-74d5c6b9c-jvfz8" in "kube-system" namespace to be "Ready" ...
	E0717 20:08:45.868703 1103141 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 20:08:45.868714 1103141 pod_ready.go:38] duration metric: took 4m2.373683506s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 20:08:45.868742 1103141 api_server.go:52] waiting for apiserver process to appear ...
	I0717 20:08:45.868791 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 20:08:45.868907 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 20:08:45.926927 1103141 cri.go:89] found id: "b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f"
	I0717 20:08:45.926965 1103141 cri.go:89] found id: ""
	I0717 20:08:45.926977 1103141 logs.go:284] 1 containers: [b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f]
	I0717 20:08:45.927049 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:45.932247 1103141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 20:08:45.932335 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 20:08:45.976080 1103141 cri.go:89] found id: "6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8"
	I0717 20:08:45.976176 1103141 cri.go:89] found id: ""
	I0717 20:08:45.976200 1103141 logs.go:284] 1 containers: [6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8]
	I0717 20:08:45.976287 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:45.981650 1103141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 20:08:45.981738 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 20:08:46.017454 1103141 cri.go:89] found id: "9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e"
	I0717 20:08:46.017487 1103141 cri.go:89] found id: ""
	I0717 20:08:46.017495 1103141 logs.go:284] 1 containers: [9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e]
	I0717 20:08:46.017578 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:46.023282 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 20:08:46.023361 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 20:08:46.055969 1103141 cri.go:89] found id: "20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52"
	I0717 20:08:46.055998 1103141 cri.go:89] found id: ""
	I0717 20:08:46.056009 1103141 logs.go:284] 1 containers: [20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52]
	I0717 20:08:46.056063 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:46.061090 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 20:08:46.061181 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 20:08:46.094968 1103141 cri.go:89] found id: "c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39"
	I0717 20:08:46.095001 1103141 cri.go:89] found id: ""
	I0717 20:08:46.095012 1103141 logs.go:284] 1 containers: [c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39]
	I0717 20:08:46.095089 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:46.099940 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 20:08:46.100018 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 20:08:46.132535 1103141 cri.go:89] found id: "7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd"
	I0717 20:08:46.132571 1103141 cri.go:89] found id: ""
	I0717 20:08:46.132586 1103141 logs.go:284] 1 containers: [7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd]
	I0717 20:08:46.132655 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:46.138029 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 20:08:46.138112 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 20:08:46.179589 1103141 cri.go:89] found id: ""
	I0717 20:08:46.179620 1103141 logs.go:284] 0 containers: []
	W0717 20:08:46.179632 1103141 logs.go:286] No container was found matching "kindnet"
	I0717 20:08:46.179640 1103141 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 20:08:46.179728 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 20:08:46.216615 1103141 cri.go:89] found id: "1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea"
	I0717 20:08:46.216642 1103141 cri.go:89] found id: ""
	I0717 20:08:46.216650 1103141 logs.go:284] 1 containers: [1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea]
	I0717 20:08:46.216782 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:46.223815 1103141 logs.go:123] Gathering logs for kube-scheduler [20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52] ...
	I0717 20:08:46.223849 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52"
	I0717 20:08:46.274046 1103141 logs.go:123] Gathering logs for kube-proxy [c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39] ...
	I0717 20:08:46.274093 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39"
	I0717 20:08:46.314239 1103141 logs.go:123] Gathering logs for kube-controller-manager [7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd] ...
	I0717 20:08:46.314285 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd"
	I0717 20:08:46.372521 1103141 logs.go:123] Gathering logs for kubelet ...
	I0717 20:08:46.372568 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 20:08:46.473516 1103141 logs.go:123] Gathering logs for describe nodes ...
	I0717 20:08:46.473576 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 20:08:46.628553 1103141 logs.go:123] Gathering logs for coredns [9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e] ...
	I0717 20:08:46.628626 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e"
	I0717 20:08:46.663929 1103141 logs.go:123] Gathering logs for storage-provisioner [1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea] ...
	I0717 20:08:46.663976 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea"
	I0717 20:08:46.699494 1103141 logs.go:123] Gathering logs for CRI-O ...
	I0717 20:08:46.699528 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 20:08:47.188357 1103141 logs.go:123] Gathering logs for container status ...
	I0717 20:08:47.188415 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 20:08:47.246863 1103141 logs.go:123] Gathering logs for dmesg ...
	I0717 20:08:47.246902 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 20:08:47.262383 1103141 logs.go:123] Gathering logs for kube-apiserver [b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f] ...
	I0717 20:08:47.262418 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f"
	I0717 20:08:47.315465 1103141 logs.go:123] Gathering logs for etcd [6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8] ...
	I0717 20:08:47.315506 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8"
	I0717 20:08:49.862911 1103141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:08:49.880685 1103141 api_server.go:72] duration metric: took 4m9.416465331s to wait for apiserver process to appear ...
	I0717 20:08:49.880717 1103141 api_server.go:88] waiting for apiserver healthz status ...
	I0717 20:08:49.880763 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 20:08:49.880828 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 20:08:49.921832 1103141 cri.go:89] found id: "b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f"
	I0717 20:08:49.921858 1103141 cri.go:89] found id: ""
	I0717 20:08:49.921867 1103141 logs.go:284] 1 containers: [b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f]
	I0717 20:08:49.921922 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:49.927202 1103141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 20:08:49.927281 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 20:08:49.962760 1103141 cri.go:89] found id: "6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8"
	I0717 20:08:49.962784 1103141 cri.go:89] found id: ""
	I0717 20:08:49.962793 1103141 logs.go:284] 1 containers: [6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8]
	I0717 20:08:49.962850 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:49.968029 1103141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 20:08:49.968123 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 20:08:50.004191 1103141 cri.go:89] found id: "9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e"
	I0717 20:08:50.004230 1103141 cri.go:89] found id: ""
	I0717 20:08:50.004239 1103141 logs.go:284] 1 containers: [9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e]
	I0717 20:08:50.004308 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:50.009150 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 20:08:50.009223 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 20:08:50.041085 1103141 cri.go:89] found id: "20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52"
	I0717 20:08:50.041109 1103141 cri.go:89] found id: ""
	I0717 20:08:50.041118 1103141 logs.go:284] 1 containers: [20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52]
	I0717 20:08:50.041170 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:50.045541 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 20:08:50.045632 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 20:08:50.082404 1103141 cri.go:89] found id: "c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39"
	I0717 20:08:50.082439 1103141 cri.go:89] found id: ""
	I0717 20:08:50.082448 1103141 logs.go:284] 1 containers: [c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39]
	I0717 20:08:50.082510 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:50.087838 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 20:08:50.087928 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 20:08:50.130019 1103141 cri.go:89] found id: "7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd"
	I0717 20:08:50.130053 1103141 cri.go:89] found id: ""
	I0717 20:08:50.130065 1103141 logs.go:284] 1 containers: [7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd]
	I0717 20:08:50.130134 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:50.134894 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 20:08:50.134974 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 20:08:50.171033 1103141 cri.go:89] found id: ""
	I0717 20:08:50.171070 1103141 logs.go:284] 0 containers: []
	W0717 20:08:50.171081 1103141 logs.go:286] No container was found matching "kindnet"
	I0717 20:08:50.171088 1103141 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 20:08:50.171158 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 20:08:50.206952 1103141 cri.go:89] found id: "1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea"
	I0717 20:08:50.206984 1103141 cri.go:89] found id: ""
	I0717 20:08:50.206996 1103141 logs.go:284] 1 containers: [1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea]
	I0717 20:08:50.207064 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:50.211123 1103141 logs.go:123] Gathering logs for etcd [6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8] ...
	I0717 20:08:50.211152 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8"
	I0717 20:08:50.257982 1103141 logs.go:123] Gathering logs for coredns [9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e] ...
	I0717 20:08:50.258031 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e"
	I0717 20:08:50.293315 1103141 logs.go:123] Gathering logs for kube-scheduler [20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52] ...
	I0717 20:08:50.293371 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52"
	I0717 20:08:50.343183 1103141 logs.go:123] Gathering logs for kube-proxy [c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39] ...
	I0717 20:08:50.343235 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39"
	I0717 20:08:50.381821 1103141 logs.go:123] Gathering logs for kubelet ...
	I0717 20:08:50.381869 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 20:08:50.487833 1103141 logs.go:123] Gathering logs for dmesg ...
	I0717 20:08:50.487878 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 20:08:50.504213 1103141 logs.go:123] Gathering logs for describe nodes ...
	I0717 20:08:50.504259 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 20:08:50.638194 1103141 logs.go:123] Gathering logs for kube-apiserver [b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f] ...
	I0717 20:08:50.638230 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f"
	I0717 20:08:50.685572 1103141 logs.go:123] Gathering logs for kube-controller-manager [7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd] ...
	I0717 20:08:50.685627 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd"
	I0717 20:08:50.740133 1103141 logs.go:123] Gathering logs for storage-provisioner [1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea] ...
	I0717 20:08:50.740188 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea"
	I0717 20:08:50.778023 1103141 logs.go:123] Gathering logs for CRI-O ...
	I0717 20:08:50.778059 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 20:08:51.310702 1103141 logs.go:123] Gathering logs for container status ...
	I0717 20:08:51.310758 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 20:08:53.857949 1103141 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I0717 20:08:53.864729 1103141 api_server.go:279] https://192.168.39.213:8443/healthz returned 200:
	ok
	I0717 20:08:53.866575 1103141 api_server.go:141] control plane version: v1.27.3
	I0717 20:08:53.866605 1103141 api_server.go:131] duration metric: took 3.985881495s to wait for apiserver health ...
	I0717 20:08:53.866613 1103141 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 20:08:53.866638 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 20:08:53.866687 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 20:08:53.902213 1103141 cri.go:89] found id: "b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f"
	I0717 20:08:53.902243 1103141 cri.go:89] found id: ""
	I0717 20:08:53.902252 1103141 logs.go:284] 1 containers: [b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f]
	I0717 20:08:53.902320 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:53.906976 1103141 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 20:08:53.907073 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 20:08:53.946040 1103141 cri.go:89] found id: "6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8"
	I0717 20:08:53.946063 1103141 cri.go:89] found id: ""
	I0717 20:08:53.946071 1103141 logs.go:284] 1 containers: [6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8]
	I0717 20:08:53.946150 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:53.951893 1103141 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 20:08:53.951963 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 20:08:53.988546 1103141 cri.go:89] found id: "9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e"
	I0717 20:08:53.988583 1103141 cri.go:89] found id: ""
	I0717 20:08:53.988594 1103141 logs.go:284] 1 containers: [9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e]
	I0717 20:08:53.988647 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:53.994338 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 20:08:53.994428 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 20:08:54.030092 1103141 cri.go:89] found id: "20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52"
	I0717 20:08:54.030123 1103141 cri.go:89] found id: ""
	I0717 20:08:54.030133 1103141 logs.go:284] 1 containers: [20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52]
	I0717 20:08:54.030198 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:54.035081 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 20:08:54.035189 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 20:08:54.069845 1103141 cri.go:89] found id: "c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39"
	I0717 20:08:54.069878 1103141 cri.go:89] found id: ""
	I0717 20:08:54.069889 1103141 logs.go:284] 1 containers: [c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39]
	I0717 20:08:54.069952 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:54.075257 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 20:08:54.075334 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 20:08:54.114477 1103141 cri.go:89] found id: "7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd"
	I0717 20:08:54.114516 1103141 cri.go:89] found id: ""
	I0717 20:08:54.114527 1103141 logs.go:284] 1 containers: [7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd]
	I0717 20:08:54.114602 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:54.119374 1103141 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 20:08:54.119464 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 20:08:54.160628 1103141 cri.go:89] found id: ""
	I0717 20:08:54.160660 1103141 logs.go:284] 0 containers: []
	W0717 20:08:54.160672 1103141 logs.go:286] No container was found matching "kindnet"
	I0717 20:08:54.160680 1103141 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 20:08:54.160752 1103141 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 20:08:54.200535 1103141 cri.go:89] found id: "1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea"
	I0717 20:08:54.200662 1103141 cri.go:89] found id: ""
	I0717 20:08:54.200674 1103141 logs.go:284] 1 containers: [1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea]
	I0717 20:08:54.200736 1103141 ssh_runner.go:195] Run: which crictl
	I0717 20:08:54.205923 1103141 logs.go:123] Gathering logs for dmesg ...
	I0717 20:08:54.205958 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 20:08:54.221020 1103141 logs.go:123] Gathering logs for describe nodes ...
	I0717 20:08:54.221057 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 20:08:54.381122 1103141 logs.go:123] Gathering logs for coredns [9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e] ...
	I0717 20:08:54.381163 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e"
	I0717 20:08:54.417207 1103141 logs.go:123] Gathering logs for kube-scheduler [20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52] ...
	I0717 20:08:54.417255 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52"
	I0717 20:08:54.469346 1103141 logs.go:123] Gathering logs for kube-proxy [c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39] ...
	I0717 20:08:54.469389 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39"
	I0717 20:08:54.513216 1103141 logs.go:123] Gathering logs for CRI-O ...
	I0717 20:08:54.513258 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 20:08:55.056597 1103141 logs.go:123] Gathering logs for kubelet ...
	I0717 20:08:55.056644 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 20:08:55.168622 1103141 logs.go:123] Gathering logs for kube-apiserver [b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f] ...
	I0717 20:08:55.168669 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f"
	I0717 20:08:55.220979 1103141 logs.go:123] Gathering logs for etcd [6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8] ...
	I0717 20:08:55.221038 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8"
	I0717 20:08:55.264086 1103141 logs.go:123] Gathering logs for kube-controller-manager [7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd] ...
	I0717 20:08:55.264124 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd"
	I0717 20:08:55.317931 1103141 logs.go:123] Gathering logs for storage-provisioner [1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea] ...
	I0717 20:08:55.317974 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea"
	I0717 20:08:55.357733 1103141 logs.go:123] Gathering logs for container status ...
	I0717 20:08:55.357770 1103141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 20:08:57.919739 1103141 system_pods.go:59] 8 kube-system pods found
	I0717 20:08:57.919785 1103141 system_pods.go:61] "coredns-5d78c9869d-gq2b2" [833e67fa-16e2-4a5c-8c39-16cc4fbd411e] Running
	I0717 20:08:57.919795 1103141 system_pods.go:61] "etcd-embed-certs-114855" [7209c449-fbf1-4343-8636-e872684db832] Running
	I0717 20:08:57.919808 1103141 system_pods.go:61] "kube-apiserver-embed-certs-114855" [d926dfc1-71e8-44cb-9efe-4c37e0982b02] Running
	I0717 20:08:57.919817 1103141 system_pods.go:61] "kube-controller-manager-embed-certs-114855" [e16de906-3b66-4882-83ca-8d5476d45d96] Running
	I0717 20:08:57.919823 1103141 system_pods.go:61] "kube-proxy-bfvnl" [6f7fb55d-fa9f-4d08-b4ab-3814af550c01] Running
	I0717 20:08:57.919830 1103141 system_pods.go:61] "kube-scheduler-embed-certs-114855" [828c7a2f-dd4b-4318-8199-026970bb3159] Running
	I0717 20:08:57.919850 1103141 system_pods.go:61] "metrics-server-74d5c6b9c-jvfz8" [f861e320-9125-4081-b043-c90d8b027f71] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:08:57.919859 1103141 system_pods.go:61] "storage-provisioner" [994ec0db-08aa-4dd5-a137-1f6984051e65] Running
	I0717 20:08:57.919866 1103141 system_pods.go:74] duration metric: took 4.053247674s to wait for pod list to return data ...
	I0717 20:08:57.919876 1103141 default_sa.go:34] waiting for default service account to be created ...
	I0717 20:08:57.925726 1103141 default_sa.go:45] found service account: "default"
	I0717 20:08:57.925756 1103141 default_sa.go:55] duration metric: took 5.874288ms for default service account to be created ...
	I0717 20:08:57.925765 1103141 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 20:08:57.934835 1103141 system_pods.go:86] 8 kube-system pods found
	I0717 20:08:57.934869 1103141 system_pods.go:89] "coredns-5d78c9869d-gq2b2" [833e67fa-16e2-4a5c-8c39-16cc4fbd411e] Running
	I0717 20:08:57.934875 1103141 system_pods.go:89] "etcd-embed-certs-114855" [7209c449-fbf1-4343-8636-e872684db832] Running
	I0717 20:08:57.934880 1103141 system_pods.go:89] "kube-apiserver-embed-certs-114855" [d926dfc1-71e8-44cb-9efe-4c37e0982b02] Running
	I0717 20:08:57.934886 1103141 system_pods.go:89] "kube-controller-manager-embed-certs-114855" [e16de906-3b66-4882-83ca-8d5476d45d96] Running
	I0717 20:08:57.934890 1103141 system_pods.go:89] "kube-proxy-bfvnl" [6f7fb55d-fa9f-4d08-b4ab-3814af550c01] Running
	I0717 20:08:57.934894 1103141 system_pods.go:89] "kube-scheduler-embed-certs-114855" [828c7a2f-dd4b-4318-8199-026970bb3159] Running
	I0717 20:08:57.934903 1103141 system_pods.go:89] "metrics-server-74d5c6b9c-jvfz8" [f861e320-9125-4081-b043-c90d8b027f71] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:08:57.934908 1103141 system_pods.go:89] "storage-provisioner" [994ec0db-08aa-4dd5-a137-1f6984051e65] Running
	I0717 20:08:57.934917 1103141 system_pods.go:126] duration metric: took 9.146607ms to wait for k8s-apps to be running ...
	I0717 20:08:57.934924 1103141 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 20:08:57.934972 1103141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:08:57.952480 1103141 system_svc.go:56] duration metric: took 17.537719ms WaitForService to wait for kubelet.
	I0717 20:08:57.952531 1103141 kubeadm.go:581] duration metric: took 4m17.48831739s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 20:08:57.952581 1103141 node_conditions.go:102] verifying NodePressure condition ...
	I0717 20:08:57.956510 1103141 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 20:08:57.956581 1103141 node_conditions.go:123] node cpu capacity is 2
	I0717 20:08:57.956599 1103141 node_conditions.go:105] duration metric: took 4.010178ms to run NodePressure ...
	I0717 20:08:57.956633 1103141 start.go:228] waiting for startup goroutines ...
	I0717 20:08:57.956646 1103141 start.go:233] waiting for cluster config update ...
	I0717 20:08:57.956665 1103141 start.go:242] writing updated cluster config ...
	I0717 20:08:57.957107 1103141 ssh_runner.go:195] Run: rm -f paused
	I0717 20:08:58.016891 1103141 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 20:08:58.019566 1103141 out.go:177] * Done! kubectl is now configured to use "embed-certs-114855" cluster and "default" namespace by default
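	The log above shows the sequence minikube walked through for the "embed-certs-114855" profile: wait up to 4m for the metrics-server pod to become Ready (which timed out), probe the apiserver /healthz endpoint, list the kube-system pods, and confirm the kubelet service before declaring startup complete. As a minimal sketch only (not part of the captured output), the same checks could be reproduced by hand; the pod name, profile name, and apiserver address are taken from the log above, and the kubectl context name is assumed to match the minikube profile:

	  # Wait up to 4 minutes for the metrics-server pod that never became Ready above.
	  kubectl --context embed-certs-114855 -n kube-system \
	    wait --for=condition=Ready pod/metrics-server-74d5c6b9c-jvfz8 --timeout=4m

	  # Probe the apiserver health endpoint minikube checked (expects "ok").
	  curl -k https://192.168.39.213:8443/healthz

	  # List kube-system pods and the CRI containers, mirroring the crictl calls above.
	  kubectl --context embed-certs-114855 -n kube-system get pods
	  minikube -p embed-certs-114855 ssh "sudo crictl ps -a"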
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-07-17 19:59:16 UTC, ends at Mon 2023-07-17 20:17:51 UTC. --
	Jul 17 20:17:50 old-k8s-version-149000 crio[711]: time="2023-07-17 20:17:50.644105590Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf6835e7df11c2226357f6c241d1255496d2ec1bbff4467041ba7a213fc6a1df,PodSandboxId:201373d9426a8631d79520d2a7f3c9697604f0121db2b5d56757ed251f902e6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689624328673689008,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf78f6d0-4bf8-449c-8231-0df3920b8b1f,},Annotations:map[string]string{io.kubernetes.container.hash: c51d53e8,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16dcbd7056062f7da96b9b046ea6ab4939f98b862f629f6379ce5c6496a9c643,PodSandboxId:464af7fa5ab483e044eb1167c4ba36b55303430e053f5bb0bf2606adbf3c0ddf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1689624327993490169,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-ldwkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f5b5b78-acc2-460b-971e-349b7f30a211,},Annotations:map[string]string{io.kubernetes.container.hash: 70a55d56,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b328b6d3a7f052482d4e509607c564e32c1d2ba6ee21889b319e62312de18e,PodSandboxId:9711ef4b247172907cc82132ada3c7a750cdc551fe24183fafc47e3413b45e64,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1689624327803406008,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t4mmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 570c5
c22-efff-40bb-8ade-e1febdbff4f1,},Annotations:map[string]string{io.kubernetes.container.hash: 65866be2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5176d659c2276cb4d2ea690ede65a41255699284bbd1fd881eaa6630e55a2f52,PodSandboxId:c913970e62f86c6ef0b732b599998b575ee8e6b016f3ab8e6c8fab2f6be1955e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1689624301402020936,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a056e5359f37632ba7566002c292f817,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 73b4374e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1a21acc33de8e9b79400d86acf9f8064b5ccae4ef21414b9aa2deef6a431ff0,PodSandboxId:5b423dca17cb5e2f83cd8b2352af0c13bb915574510673982d39410069ce1b0a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1689624300167749289,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fa9baa16256a3310adcfd7fde65c47fa6f991f723fdf0bb190f0c32fc383866,PodSandboxId:bbf496ac378c4552e8ecae7298b7c89a386436f9e8b30c7fd93ff5104b0bc4bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1689624299913235918,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1346e3f0df0827495f5afc7d45c69f1,},Annotations:map[string]string{io.kubern
etes.container.hash: b570d46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a07469cd5bd2e9a2b83c86a2e1f90f52d958ace45f8dfef864be1b2577e9a77c,PodSandboxId:4845123d26cfcfbb998c05fcec7aaacde3f02951fd751df59783221569dbed11,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1689624299708607138,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=eca5a60c-7927-4666-8469-30bb9196235e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:50 old-k8s-version-149000 crio[711]: time="2023-07-17 20:17:50.911234845Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6cca6ac6-8990-4e34-986c-19606b5cd502 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:50 old-k8s-version-149000 crio[711]: time="2023-07-17 20:17:50.911404060Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6cca6ac6-8990-4e34-986c-19606b5cd502 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:50 old-k8s-version-149000 crio[711]: time="2023-07-17 20:17:50.911740553Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf6835e7df11c2226357f6c241d1255496d2ec1bbff4467041ba7a213fc6a1df,PodSandboxId:201373d9426a8631d79520d2a7f3c9697604f0121db2b5d56757ed251f902e6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689624328673689008,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf78f6d0-4bf8-449c-8231-0df3920b8b1f,},Annotations:map[string]string{io.kubernetes.container.hash: c51d53e8,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16dcbd7056062f7da96b9b046ea6ab4939f98b862f629f6379ce5c6496a9c643,PodSandboxId:464af7fa5ab483e044eb1167c4ba36b55303430e053f5bb0bf2606adbf3c0ddf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1689624327993490169,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-ldwkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f5b5b78-acc2-460b-971e-349b7f30a211,},Annotations:map[string]string{io.kubernetes.container.hash: 70a55d56,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b328b6d3a7f052482d4e509607c564e32c1d2ba6ee21889b319e62312de18e,PodSandboxId:9711ef4b247172907cc82132ada3c7a750cdc551fe24183fafc47e3413b45e64,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1689624327803406008,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t4mmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 570c5
c22-efff-40bb-8ade-e1febdbff4f1,},Annotations:map[string]string{io.kubernetes.container.hash: 65866be2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5176d659c2276cb4d2ea690ede65a41255699284bbd1fd881eaa6630e55a2f52,PodSandboxId:c913970e62f86c6ef0b732b599998b575ee8e6b016f3ab8e6c8fab2f6be1955e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1689624301402020936,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a056e5359f37632ba7566002c292f817,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 73b4374e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1a21acc33de8e9b79400d86acf9f8064b5ccae4ef21414b9aa2deef6a431ff0,PodSandboxId:5b423dca17cb5e2f83cd8b2352af0c13bb915574510673982d39410069ce1b0a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1689624300167749289,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fa9baa16256a3310adcfd7fde65c47fa6f991f723fdf0bb190f0c32fc383866,PodSandboxId:bbf496ac378c4552e8ecae7298b7c89a386436f9e8b30c7fd93ff5104b0bc4bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1689624299913235918,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1346e3f0df0827495f5afc7d45c69f1,},Annotations:map[string]string{io.kubern
etes.container.hash: b570d46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a07469cd5bd2e9a2b83c86a2e1f90f52d958ace45f8dfef864be1b2577e9a77c,PodSandboxId:4845123d26cfcfbb998c05fcec7aaacde3f02951fd751df59783221569dbed11,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1689624299708607138,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6cca6ac6-8990-4e34-986c-19606b5cd502 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:50 old-k8s-version-149000 crio[711]: time="2023-07-17 20:17:50.951633430Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1a932388-f7a0-412f-8236-d06044dc0a4c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:50 old-k8s-version-149000 crio[711]: time="2023-07-17 20:17:50.951700880Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1a932388-f7a0-412f-8236-d06044dc0a4c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:50 old-k8s-version-149000 crio[711]: time="2023-07-17 20:17:50.951947426Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf6835e7df11c2226357f6c241d1255496d2ec1bbff4467041ba7a213fc6a1df,PodSandboxId:201373d9426a8631d79520d2a7f3c9697604f0121db2b5d56757ed251f902e6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689624328673689008,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf78f6d0-4bf8-449c-8231-0df3920b8b1f,},Annotations:map[string]string{io.kubernetes.container.hash: c51d53e8,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16dcbd7056062f7da96b9b046ea6ab4939f98b862f629f6379ce5c6496a9c643,PodSandboxId:464af7fa5ab483e044eb1167c4ba36b55303430e053f5bb0bf2606adbf3c0ddf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1689624327993490169,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-ldwkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f5b5b78-acc2-460b-971e-349b7f30a211,},Annotations:map[string]string{io.kubernetes.container.hash: 70a55d56,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b328b6d3a7f052482d4e509607c564e32c1d2ba6ee21889b319e62312de18e,PodSandboxId:9711ef4b247172907cc82132ada3c7a750cdc551fe24183fafc47e3413b45e64,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1689624327803406008,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t4mmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 570c5
c22-efff-40bb-8ade-e1febdbff4f1,},Annotations:map[string]string{io.kubernetes.container.hash: 65866be2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5176d659c2276cb4d2ea690ede65a41255699284bbd1fd881eaa6630e55a2f52,PodSandboxId:c913970e62f86c6ef0b732b599998b575ee8e6b016f3ab8e6c8fab2f6be1955e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1689624301402020936,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a056e5359f37632ba7566002c292f817,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 73b4374e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1a21acc33de8e9b79400d86acf9f8064b5ccae4ef21414b9aa2deef6a431ff0,PodSandboxId:5b423dca17cb5e2f83cd8b2352af0c13bb915574510673982d39410069ce1b0a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1689624300167749289,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fa9baa16256a3310adcfd7fde65c47fa6f991f723fdf0bb190f0c32fc383866,PodSandboxId:bbf496ac378c4552e8ecae7298b7c89a386436f9e8b30c7fd93ff5104b0bc4bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1689624299913235918,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1346e3f0df0827495f5afc7d45c69f1,},Annotations:map[string]string{io.kubern
etes.container.hash: b570d46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a07469cd5bd2e9a2b83c86a2e1f90f52d958ace45f8dfef864be1b2577e9a77c,PodSandboxId:4845123d26cfcfbb998c05fcec7aaacde3f02951fd751df59783221569dbed11,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1689624299708607138,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1a932388-f7a0-412f-8236-d06044dc0a4c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:50 old-k8s-version-149000 crio[711]: time="2023-07-17 20:17:50.995333594Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b955711b-fc4c-4395-b66f-fe4a2151087f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:50 old-k8s-version-149000 crio[711]: time="2023-07-17 20:17:50.995406629Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b955711b-fc4c-4395-b66f-fe4a2151087f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:50 old-k8s-version-149000 crio[711]: time="2023-07-17 20:17:50.995650031Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf6835e7df11c2226357f6c241d1255496d2ec1bbff4467041ba7a213fc6a1df,PodSandboxId:201373d9426a8631d79520d2a7f3c9697604f0121db2b5d56757ed251f902e6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689624328673689008,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf78f6d0-4bf8-449c-8231-0df3920b8b1f,},Annotations:map[string]string{io.kubernetes.container.hash: c51d53e8,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16dcbd7056062f7da96b9b046ea6ab4939f98b862f629f6379ce5c6496a9c643,PodSandboxId:464af7fa5ab483e044eb1167c4ba36b55303430e053f5bb0bf2606adbf3c0ddf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1689624327993490169,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-ldwkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f5b5b78-acc2-460b-971e-349b7f30a211,},Annotations:map[string]string{io.kubernetes.container.hash: 70a55d56,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b328b6d3a7f052482d4e509607c564e32c1d2ba6ee21889b319e62312de18e,PodSandboxId:9711ef4b247172907cc82132ada3c7a750cdc551fe24183fafc47e3413b45e64,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1689624327803406008,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t4mmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 570c5
c22-efff-40bb-8ade-e1febdbff4f1,},Annotations:map[string]string{io.kubernetes.container.hash: 65866be2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5176d659c2276cb4d2ea690ede65a41255699284bbd1fd881eaa6630e55a2f52,PodSandboxId:c913970e62f86c6ef0b732b599998b575ee8e6b016f3ab8e6c8fab2f6be1955e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1689624301402020936,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a056e5359f37632ba7566002c292f817,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 73b4374e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1a21acc33de8e9b79400d86acf9f8064b5ccae4ef21414b9aa2deef6a431ff0,PodSandboxId:5b423dca17cb5e2f83cd8b2352af0c13bb915574510673982d39410069ce1b0a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1689624300167749289,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fa9baa16256a3310adcfd7fde65c47fa6f991f723fdf0bb190f0c32fc383866,PodSandboxId:bbf496ac378c4552e8ecae7298b7c89a386436f9e8b30c7fd93ff5104b0bc4bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1689624299913235918,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1346e3f0df0827495f5afc7d45c69f1,},Annotations:map[string]string{io.kubern
etes.container.hash: b570d46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a07469cd5bd2e9a2b83c86a2e1f90f52d958ace45f8dfef864be1b2577e9a77c,PodSandboxId:4845123d26cfcfbb998c05fcec7aaacde3f02951fd751df59783221569dbed11,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1689624299708607138,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b955711b-fc4c-4395-b66f-fe4a2151087f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:51 old-k8s-version-149000 crio[711]: time="2023-07-17 20:17:51.040038114Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7ec6b525-6ad1-4d6a-80e0-337c3fe83869 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:51 old-k8s-version-149000 crio[711]: time="2023-07-17 20:17:51.040142962Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7ec6b525-6ad1-4d6a-80e0-337c3fe83869 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:51 old-k8s-version-149000 crio[711]: time="2023-07-17 20:17:51.040412623Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf6835e7df11c2226357f6c241d1255496d2ec1bbff4467041ba7a213fc6a1df,PodSandboxId:201373d9426a8631d79520d2a7f3c9697604f0121db2b5d56757ed251f902e6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689624328673689008,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf78f6d0-4bf8-449c-8231-0df3920b8b1f,},Annotations:map[string]string{io.kubernetes.container.hash: c51d53e8,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16dcbd7056062f7da96b9b046ea6ab4939f98b862f629f6379ce5c6496a9c643,PodSandboxId:464af7fa5ab483e044eb1167c4ba36b55303430e053f5bb0bf2606adbf3c0ddf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1689624327993490169,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-ldwkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f5b5b78-acc2-460b-971e-349b7f30a211,},Annotations:map[string]string{io.kubernetes.container.hash: 70a55d56,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b328b6d3a7f052482d4e509607c564e32c1d2ba6ee21889b319e62312de18e,PodSandboxId:9711ef4b247172907cc82132ada3c7a750cdc551fe24183fafc47e3413b45e64,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1689624327803406008,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t4mmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 570c5
c22-efff-40bb-8ade-e1febdbff4f1,},Annotations:map[string]string{io.kubernetes.container.hash: 65866be2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5176d659c2276cb4d2ea690ede65a41255699284bbd1fd881eaa6630e55a2f52,PodSandboxId:c913970e62f86c6ef0b732b599998b575ee8e6b016f3ab8e6c8fab2f6be1955e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1689624301402020936,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a056e5359f37632ba7566002c292f817,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 73b4374e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1a21acc33de8e9b79400d86acf9f8064b5ccae4ef21414b9aa2deef6a431ff0,PodSandboxId:5b423dca17cb5e2f83cd8b2352af0c13bb915574510673982d39410069ce1b0a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1689624300167749289,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fa9baa16256a3310adcfd7fde65c47fa6f991f723fdf0bb190f0c32fc383866,PodSandboxId:bbf496ac378c4552e8ecae7298b7c89a386436f9e8b30c7fd93ff5104b0bc4bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1689624299913235918,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1346e3f0df0827495f5afc7d45c69f1,},Annotations:map[string]string{io.kubern
etes.container.hash: b570d46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a07469cd5bd2e9a2b83c86a2e1f90f52d958ace45f8dfef864be1b2577e9a77c,PodSandboxId:4845123d26cfcfbb998c05fcec7aaacde3f02951fd751df59783221569dbed11,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1689624299708607138,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7ec6b525-6ad1-4d6a-80e0-337c3fe83869 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:51 old-k8s-version-149000 crio[711]: time="2023-07-17 20:17:51.079995564Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1cde53dc-81e6-4817-9963-62b9656d7918 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:51 old-k8s-version-149000 crio[711]: time="2023-07-17 20:17:51.080063617Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1cde53dc-81e6-4817-9963-62b9656d7918 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:51 old-k8s-version-149000 crio[711]: time="2023-07-17 20:17:51.080243855Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf6835e7df11c2226357f6c241d1255496d2ec1bbff4467041ba7a213fc6a1df,PodSandboxId:201373d9426a8631d79520d2a7f3c9697604f0121db2b5d56757ed251f902e6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689624328673689008,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf78f6d0-4bf8-449c-8231-0df3920b8b1f,},Annotations:map[string]string{io.kubernetes.container.hash: c51d53e8,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16dcbd7056062f7da96b9b046ea6ab4939f98b862f629f6379ce5c6496a9c643,PodSandboxId:464af7fa5ab483e044eb1167c4ba36b55303430e053f5bb0bf2606adbf3c0ddf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1689624327993490169,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-ldwkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f5b5b78-acc2-460b-971e-349b7f30a211,},Annotations:map[string]string{io.kubernetes.container.hash: 70a55d56,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b328b6d3a7f052482d4e509607c564e32c1d2ba6ee21889b319e62312de18e,PodSandboxId:9711ef4b247172907cc82132ada3c7a750cdc551fe24183fafc47e3413b45e64,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1689624327803406008,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t4mmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 570c5
c22-efff-40bb-8ade-e1febdbff4f1,},Annotations:map[string]string{io.kubernetes.container.hash: 65866be2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5176d659c2276cb4d2ea690ede65a41255699284bbd1fd881eaa6630e55a2f52,PodSandboxId:c913970e62f86c6ef0b732b599998b575ee8e6b016f3ab8e6c8fab2f6be1955e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1689624301402020936,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a056e5359f37632ba7566002c292f817,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 73b4374e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1a21acc33de8e9b79400d86acf9f8064b5ccae4ef21414b9aa2deef6a431ff0,PodSandboxId:5b423dca17cb5e2f83cd8b2352af0c13bb915574510673982d39410069ce1b0a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1689624300167749289,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fa9baa16256a3310adcfd7fde65c47fa6f991f723fdf0bb190f0c32fc383866,PodSandboxId:bbf496ac378c4552e8ecae7298b7c89a386436f9e8b30c7fd93ff5104b0bc4bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1689624299913235918,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1346e3f0df0827495f5afc7d45c69f1,},Annotations:map[string]string{io.kubern
etes.container.hash: b570d46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a07469cd5bd2e9a2b83c86a2e1f90f52d958ace45f8dfef864be1b2577e9a77c,PodSandboxId:4845123d26cfcfbb998c05fcec7aaacde3f02951fd751df59783221569dbed11,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1689624299708607138,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1cde53dc-81e6-4817-9963-62b9656d7918 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:51 old-k8s-version-149000 crio[711]: time="2023-07-17 20:17:51.123882794Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=40fefec1-b1d5-4452-a1ce-6d06f0774cb6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:51 old-k8s-version-149000 crio[711]: time="2023-07-17 20:17:51.123953094Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=40fefec1-b1d5-4452-a1ce-6d06f0774cb6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:51 old-k8s-version-149000 crio[711]: time="2023-07-17 20:17:51.124234558Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf6835e7df11c2226357f6c241d1255496d2ec1bbff4467041ba7a213fc6a1df,PodSandboxId:201373d9426a8631d79520d2a7f3c9697604f0121db2b5d56757ed251f902e6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689624328673689008,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf78f6d0-4bf8-449c-8231-0df3920b8b1f,},Annotations:map[string]string{io.kubernetes.container.hash: c51d53e8,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16dcbd7056062f7da96b9b046ea6ab4939f98b862f629f6379ce5c6496a9c643,PodSandboxId:464af7fa5ab483e044eb1167c4ba36b55303430e053f5bb0bf2606adbf3c0ddf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1689624327993490169,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-ldwkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f5b5b78-acc2-460b-971e-349b7f30a211,},Annotations:map[string]string{io.kubernetes.container.hash: 70a55d56,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b328b6d3a7f052482d4e509607c564e32c1d2ba6ee21889b319e62312de18e,PodSandboxId:9711ef4b247172907cc82132ada3c7a750cdc551fe24183fafc47e3413b45e64,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1689624327803406008,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t4mmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 570c5
c22-efff-40bb-8ade-e1febdbff4f1,},Annotations:map[string]string{io.kubernetes.container.hash: 65866be2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5176d659c2276cb4d2ea690ede65a41255699284bbd1fd881eaa6630e55a2f52,PodSandboxId:c913970e62f86c6ef0b732b599998b575ee8e6b016f3ab8e6c8fab2f6be1955e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1689624301402020936,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a056e5359f37632ba7566002c292f817,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 73b4374e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1a21acc33de8e9b79400d86acf9f8064b5ccae4ef21414b9aa2deef6a431ff0,PodSandboxId:5b423dca17cb5e2f83cd8b2352af0c13bb915574510673982d39410069ce1b0a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1689624300167749289,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fa9baa16256a3310adcfd7fde65c47fa6f991f723fdf0bb190f0c32fc383866,PodSandboxId:bbf496ac378c4552e8ecae7298b7c89a386436f9e8b30c7fd93ff5104b0bc4bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1689624299913235918,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1346e3f0df0827495f5afc7d45c69f1,},Annotations:map[string]string{io.kubern
etes.container.hash: b570d46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a07469cd5bd2e9a2b83c86a2e1f90f52d958ace45f8dfef864be1b2577e9a77c,PodSandboxId:4845123d26cfcfbb998c05fcec7aaacde3f02951fd751df59783221569dbed11,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1689624299708607138,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=40fefec1-b1d5-4452-a1ce-6d06f0774cb6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:51 old-k8s-version-149000 crio[711]: time="2023-07-17 20:17:51.163724926Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f25d3e0e-c779-46b4-a12d-f3bd590cf2d8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:51 old-k8s-version-149000 crio[711]: time="2023-07-17 20:17:51.163799540Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f25d3e0e-c779-46b4-a12d-f3bd590cf2d8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:51 old-k8s-version-149000 crio[711]: time="2023-07-17 20:17:51.164082618Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf6835e7df11c2226357f6c241d1255496d2ec1bbff4467041ba7a213fc6a1df,PodSandboxId:201373d9426a8631d79520d2a7f3c9697604f0121db2b5d56757ed251f902e6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689624328673689008,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf78f6d0-4bf8-449c-8231-0df3920b8b1f,},Annotations:map[string]string{io.kubernetes.container.hash: c51d53e8,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16dcbd7056062f7da96b9b046ea6ab4939f98b862f629f6379ce5c6496a9c643,PodSandboxId:464af7fa5ab483e044eb1167c4ba36b55303430e053f5bb0bf2606adbf3c0ddf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1689624327993490169,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-ldwkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f5b5b78-acc2-460b-971e-349b7f30a211,},Annotations:map[string]string{io.kubernetes.container.hash: 70a55d56,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b328b6d3a7f052482d4e509607c564e32c1d2ba6ee21889b319e62312de18e,PodSandboxId:9711ef4b247172907cc82132ada3c7a750cdc551fe24183fafc47e3413b45e64,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1689624327803406008,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t4mmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 570c5
c22-efff-40bb-8ade-e1febdbff4f1,},Annotations:map[string]string{io.kubernetes.container.hash: 65866be2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5176d659c2276cb4d2ea690ede65a41255699284bbd1fd881eaa6630e55a2f52,PodSandboxId:c913970e62f86c6ef0b732b599998b575ee8e6b016f3ab8e6c8fab2f6be1955e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1689624301402020936,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a056e5359f37632ba7566002c292f817,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 73b4374e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1a21acc33de8e9b79400d86acf9f8064b5ccae4ef21414b9aa2deef6a431ff0,PodSandboxId:5b423dca17cb5e2f83cd8b2352af0c13bb915574510673982d39410069ce1b0a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1689624300167749289,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fa9baa16256a3310adcfd7fde65c47fa6f991f723fdf0bb190f0c32fc383866,PodSandboxId:bbf496ac378c4552e8ecae7298b7c89a386436f9e8b30c7fd93ff5104b0bc4bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1689624299913235918,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1346e3f0df0827495f5afc7d45c69f1,},Annotations:map[string]string{io.kubern
etes.container.hash: b570d46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a07469cd5bd2e9a2b83c86a2e1f90f52d958ace45f8dfef864be1b2577e9a77c,PodSandboxId:4845123d26cfcfbb998c05fcec7aaacde3f02951fd751df59783221569dbed11,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1689624299708607138,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f25d3e0e-c779-46b4-a12d-f3bd590cf2d8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:51 old-k8s-version-149000 crio[711]: time="2023-07-17 20:17:51.201407842Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ef415aa5-2421-4e14-9463-e3695405075b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:51 old-k8s-version-149000 crio[711]: time="2023-07-17 20:17:51.201500430Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ef415aa5-2421-4e14-9463-e3695405075b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:17:51 old-k8s-version-149000 crio[711]: time="2023-07-17 20:17:51.201844391Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf6835e7df11c2226357f6c241d1255496d2ec1bbff4467041ba7a213fc6a1df,PodSandboxId:201373d9426a8631d79520d2a7f3c9697604f0121db2b5d56757ed251f902e6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689624328673689008,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf78f6d0-4bf8-449c-8231-0df3920b8b1f,},Annotations:map[string]string{io.kubernetes.container.hash: c51d53e8,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16dcbd7056062f7da96b9b046ea6ab4939f98b862f629f6379ce5c6496a9c643,PodSandboxId:464af7fa5ab483e044eb1167c4ba36b55303430e053f5bb0bf2606adbf3c0ddf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1689624327993490169,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-ldwkf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f5b5b78-acc2-460b-971e-349b7f30a211,},Annotations:map[string]string{io.kubernetes.container.hash: 70a55d56,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b328b6d3a7f052482d4e509607c564e32c1d2ba6ee21889b319e62312de18e,PodSandboxId:9711ef4b247172907cc82132ada3c7a750cdc551fe24183fafc47e3413b45e64,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1689624327803406008,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t4mmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 570c5
c22-efff-40bb-8ade-e1febdbff4f1,},Annotations:map[string]string{io.kubernetes.container.hash: 65866be2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5176d659c2276cb4d2ea690ede65a41255699284bbd1fd881eaa6630e55a2f52,PodSandboxId:c913970e62f86c6ef0b732b599998b575ee8e6b016f3ab8e6c8fab2f6be1955e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1689624301402020936,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a056e5359f37632ba7566002c292f817,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 73b4374e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1a21acc33de8e9b79400d86acf9f8064b5ccae4ef21414b9aa2deef6a431ff0,PodSandboxId:5b423dca17cb5e2f83cd8b2352af0c13bb915574510673982d39410069ce1b0a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1689624300167749289,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fa9baa16256a3310adcfd7fde65c47fa6f991f723fdf0bb190f0c32fc383866,PodSandboxId:bbf496ac378c4552e8ecae7298b7c89a386436f9e8b30c7fd93ff5104b0bc4bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1689624299913235918,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1346e3f0df0827495f5afc7d45c69f1,},Annotations:map[string]string{io.kubern
etes.container.hash: b570d46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a07469cd5bd2e9a2b83c86a2e1f90f52d958ace45f8dfef864be1b2577e9a77c,PodSandboxId:4845123d26cfcfbb998c05fcec7aaacde3f02951fd751df59783221569dbed11,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1689624299708607138,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-149000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map
[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ef415aa5-2421-4e14-9463-e3695405075b name=/runtime.v1alpha2.RuntimeService/ListContainers
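The debug entries above are CRI-O answering the CRI ListContainers RPC (/runtime.v1alpha2.RuntimeService/ListContainers); each response enumerates the same seven running containers on this node. As a rough sketch, the same call can be issued by hand from the guest, assuming crictl is present there and CRI-O is listening on /var/run/crio/crio.sock (the socket recorded in the node's kubeadm cri-socket annotation further down):

    $ minikube -p old-k8s-version-149000 ssh
    $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a

crictl ps is backed by this same ListContainers call, so its table should line up with the container status block that follows.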
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	bf6835e7df11c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 minutes ago      Running             storage-provisioner       0                   201373d9426a8
	16dcbd7056062       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   12 minutes ago      Running             coredns                   0                   464af7fa5ab48
	d2b328b6d3a7f       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   12 minutes ago      Running             kube-proxy                0                   9711ef4b24717
	5176d659c2276       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   12 minutes ago      Running             etcd                      0                   c913970e62f86
	d1a21acc33de8       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   12 minutes ago      Running             kube-scheduler            0                   5b423dca17cb5
	9fa9baa16256a       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   12 minutes ago      Running             kube-apiserver            0                   bbf496ac378c4
	a07469cd5bd2e       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   12 minutes ago      Running             kube-controller-manager   0                   4845123d26cfc
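The truncated IDs in the first column can be handed back to crictl to drill into a single container; a sketch, under the same socket assumption as above, using the storage-provisioner entry from this table:

    $ sudo crictl inspect bf6835e7df11c   # full CRI metadata: labels, annotations, mounts
    $ sudo crictl logs bf6835e7df11c      # the container's stdout/stderr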
	
	* 
	* ==> coredns [16dcbd7056062f7da96b9b046ea6ab4939f98b862f629f6379ce5c6496a9c643] <==
	* .:53
	2023-07-17T20:05:28.462Z [INFO] plugin/reload: Running configuration MD5 = 06ff7f9bb57317d7ab02f5fb9baaa00d
	2023-07-17T20:05:28.463Z [INFO] CoreDNS-1.6.2
	2023-07-17T20:05:28.463Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2023-07-17T20:05:28.480Z [INFO] 127.0.0.1:33108 - 42238 "HINFO IN 8359485099469103757.6097109787848091355. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014647044s
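CoreDNS 1.6.2 reports the MD5 of the configuration it loaded and then answers its own HINFO self-test query over UDP. To see the Corefile behind that MD5, a sketch assuming the usual kubeadm-style coredns ConfigMap and a kubectl context named after the minikube profile:

    $ kubectl --context old-k8s-version-149000 -n kube-system get configmap coredns -o yaml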
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-149000
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-149000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5
	                    minikube.k8s.io/name=old-k8s-version-149000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T20_05_10_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 20:05:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 20:17:05 +0000   Mon, 17 Jul 2023 20:05:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 20:17:05 +0000   Mon, 17 Jul 2023 20:05:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 20:17:05 +0000   Mon, 17 Jul 2023 20:05:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 20:17:05 +0000   Mon, 17 Jul 2023 20:05:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.177
	  Hostname:    old-k8s-version-149000
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 6b77956aa43d4cc8852ff5e5c774a7ae
	 System UUID:                6b77956a-a43d-4cc8-852f-f5e5c774a7ae
	 Boot ID:                    f3291a84-0139-43be-94c0-25c5c67f2cac
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-ldwkf                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                etcd-old-k8s-version-149000                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-apiserver-old-k8s-version-149000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-controller-manager-old-k8s-version-149000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-proxy-t4mmh                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-scheduler-old-k8s-version-149000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                metrics-server-74d5856cc6-cxzws                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         12m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  Starting                 12m                kubelet, old-k8s-version-149000     Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet, old-k8s-version-149000     Node old-k8s-version-149000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet, old-k8s-version-149000     Node old-k8s-version-149000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet, old-k8s-version-149000     Node old-k8s-version-149000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet, old-k8s-version-149000     Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kube-proxy, old-k8s-version-149000  Starting kube-proxy.
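This block is ordinary kubectl describe node output for the single control-plane node. A sketch for regenerating it against this profile, assuming the kubectl context carries the profile name as minikube sets it up:

    $ kubectl --context old-k8s-version-149000 describe node old-k8s-version-149000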
	
	* 
	* ==> dmesg <==
	* [Jul17 19:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.092674] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.241092] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.726038] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.163907] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.676919] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.839557] systemd-fstab-generator[635]: Ignoring "noauto" for root device
	[  +0.127668] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.189640] systemd-fstab-generator[659]: Ignoring "noauto" for root device
	[  +0.135101] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.265498] systemd-fstab-generator[694]: Ignoring "noauto" for root device
	[ +20.134693] systemd-fstab-generator[1035]: Ignoring "noauto" for root device
	[  +0.489721] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jul17 20:00] kauditd_printk_skb: 13 callbacks suppressed
	[  +6.755286] kauditd_printk_skb: 2 callbacks suppressed
	[Jul17 20:04] kauditd_printk_skb: 2 callbacks suppressed
	[ +12.747541] systemd-fstab-generator[3108]: Ignoring "noauto" for root device
	[Jul17 20:05] kauditd_printk_skb: 4 callbacks suppressed
	
	* 
	* ==> etcd [5176d659c2276cb4d2ea690ede65a41255699284bbd1fd881eaa6630e55a2f52] <==
	* 2023-07-17 20:05:01.575872 I | raft: 5a25ba9993a27c1b became follower at term 0
	2023-07-17 20:05:01.575908 I | raft: newRaft 5a25ba9993a27c1b [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-07-17 20:05:01.575930 I | raft: 5a25ba9993a27c1b became follower at term 1
	2023-07-17 20:05:01.595004 W | auth: simple token is not cryptographically signed
	2023-07-17 20:05:01.601503 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-07-17 20:05:01.603740 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-07-17 20:05:01.603945 I | embed: listening for metrics on http://192.168.50.177:2381
	2023-07-17 20:05:01.604288 I | etcdserver: 5a25ba9993a27c1b as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-07-17 20:05:01.605180 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-07-17 20:05:01.605457 I | etcdserver/membership: added member 5a25ba9993a27c1b [https://192.168.50.177:2380] to cluster 3f06f8e9368d6a9e
	2023-07-17 20:05:01.676646 I | raft: 5a25ba9993a27c1b is starting a new election at term 1
	2023-07-17 20:05:01.676921 I | raft: 5a25ba9993a27c1b became candidate at term 2
	2023-07-17 20:05:01.677033 I | raft: 5a25ba9993a27c1b received MsgVoteResp from 5a25ba9993a27c1b at term 2
	2023-07-17 20:05:01.677064 I | raft: 5a25ba9993a27c1b became leader at term 2
	2023-07-17 20:05:01.677179 I | raft: raft.node: 5a25ba9993a27c1b elected leader 5a25ba9993a27c1b at term 2
	2023-07-17 20:05:01.677792 I | etcdserver: published {Name:old-k8s-version-149000 ClientURLs:[https://192.168.50.177:2379]} to cluster 3f06f8e9368d6a9e
	2023-07-17 20:05:01.677907 I | embed: ready to serve client requests
	2023-07-17 20:05:01.677975 I | embed: ready to serve client requests
	2023-07-17 20:05:01.679237 I | embed: serving client requests on 192.168.50.177:2379
	2023-07-17 20:05:01.679405 I | embed: serving client requests on 127.0.0.1:2379
	2023-07-17 20:05:01.679783 I | etcdserver: setting up the initial cluster version to 3.3
	2023-07-17 20:05:01.687694 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-07-17 20:05:01.687771 I | etcdserver/api: enabled capabilities for version 3.3
	2023-07-17 20:15:01.913475 I | mvcc: store.index: compact 679
	2023-07-17 20:15:01.916690 I | mvcc: finished scheduled compaction at 679 (took 2.624362ms)
	
	* 
	* ==> kernel <==
	*  20:17:51 up 18 min,  0 users,  load average: 0.32, 0.37, 0.27
	Linux old-k8s-version-149000 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [9fa9baa16256a3310adcfd7fde65c47fa6f991f723fdf0bb190f0c32fc383866] <==
	* I0717 20:10:06.342295       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0717 20:10:06.342844       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 20:10:06.342975       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 20:10:06.343023       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 20:11:06.343455       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0717 20:11:06.343782       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 20:11:06.343859       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 20:11:06.343883       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 20:13:06.344383       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0717 20:13:06.344583       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 20:13:06.344656       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 20:13:06.344668       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 20:15:06.346041       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0717 20:15:06.346588       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 20:15:06.346724       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 20:15:06.346778       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 20:16:06.347308       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0717 20:16:06.347717       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 20:16:06.347819       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 20:16:06.347847       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [a07469cd5bd2e9a2b83c86a2e1f90f52d958ace45f8dfef864be1b2577e9a77c] <==
	* E0717 20:11:28.665735       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 20:11:50.665084       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 20:11:58.918090       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 20:12:22.667461       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 20:12:29.170326       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 20:12:54.669756       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 20:12:59.422415       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 20:13:26.671888       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 20:13:29.674402       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 20:13:58.675379       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 20:13:59.927054       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0717 20:14:30.178954       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 20:14:30.678251       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 20:15:00.431689       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 20:15:02.680092       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 20:15:30.683965       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 20:15:34.682328       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 20:16:00.936337       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 20:16:06.685105       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 20:16:31.189350       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 20:16:38.687431       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 20:17:01.442028       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 20:17:10.689731       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 20:17:31.694320       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 20:17:42.692090       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [d2b328b6d3a7f052482d4e509607c564e32c1d2ba6ee21889b319e62312de18e] <==
	* W0717 20:05:28.495472       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0717 20:05:28.529210       1 node.go:135] Successfully retrieved node IP: 192.168.50.177
	I0717 20:05:28.529351       1 server_others.go:149] Using iptables Proxier.
	I0717 20:05:28.530891       1 server.go:529] Version: v1.16.0
	I0717 20:05:28.534086       1 config.go:313] Starting service config controller
	I0717 20:05:28.534477       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0717 20:05:28.537757       1 config.go:131] Starting endpoints config controller
	I0717 20:05:28.557264       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0717 20:05:28.653335       1 shared_informer.go:204] Caches are synced for service config 
	I0717 20:05:28.658263       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [d1a21acc33de8e9b79400d86acf9f8064b5ccae4ef21414b9aa2deef6a431ff0] <==
	* I0717 20:05:05.339633       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0717 20:05:05.389246       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 20:05:05.389370       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 20:05:05.394737       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 20:05:05.395010       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 20:05:05.395074       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 20:05:05.395126       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 20:05:05.395185       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 20:05:05.395298       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 20:05:05.395349       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 20:05:05.395393       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 20:05:05.396892       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 20:05:06.391861       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 20:05:06.398008       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 20:05:06.399950       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 20:05:06.400164       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 20:05:06.401449       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 20:05:06.402452       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 20:05:06.403383       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 20:05:06.407264       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 20:05:06.407870       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 20:05:06.410066       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 20:05:06.412220       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 20:05:25.318005       1 factory.go:585] pod is already present in the activeQ
	E0717 20:05:25.355627       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-07-17 19:59:16 UTC, ends at Mon 2023-07-17 20:17:51 UTC. --
	Jul 17 20:13:18 old-k8s-version-149000 kubelet[3114]: E0717 20:13:18.147263    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 20:13:32 old-k8s-version-149000 kubelet[3114]: E0717 20:13:32.148049    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 20:13:45 old-k8s-version-149000 kubelet[3114]: E0717 20:13:45.147321    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 20:13:56 old-k8s-version-149000 kubelet[3114]: E0717 20:13:56.147234    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 20:14:09 old-k8s-version-149000 kubelet[3114]: E0717 20:14:09.146764    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 20:14:20 old-k8s-version-149000 kubelet[3114]: E0717 20:14:20.148766    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 20:14:34 old-k8s-version-149000 kubelet[3114]: E0717 20:14:34.146930    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 20:14:47 old-k8s-version-149000 kubelet[3114]: E0717 20:14:47.146975    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 20:14:58 old-k8s-version-149000 kubelet[3114]: E0717 20:14:58.240263    3114 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Jul 17 20:15:02 old-k8s-version-149000 kubelet[3114]: E0717 20:15:02.146637    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 20:15:13 old-k8s-version-149000 kubelet[3114]: E0717 20:15:13.146705    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 20:15:28 old-k8s-version-149000 kubelet[3114]: E0717 20:15:28.148682    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 20:15:42 old-k8s-version-149000 kubelet[3114]: E0717 20:15:42.147173    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 20:15:54 old-k8s-version-149000 kubelet[3114]: E0717 20:15:54.148160    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 20:16:08 old-k8s-version-149000 kubelet[3114]: E0717 20:16:08.171304    3114 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 17 20:16:08 old-k8s-version-149000 kubelet[3114]: E0717 20:16:08.171747    3114 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 17 20:16:08 old-k8s-version-149000 kubelet[3114]: E0717 20:16:08.172447    3114 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 17 20:16:08 old-k8s-version-149000 kubelet[3114]: E0717 20:16:08.172715    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Jul 17 20:16:21 old-k8s-version-149000 kubelet[3114]: E0717 20:16:21.146954    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 20:16:34 old-k8s-version-149000 kubelet[3114]: E0717 20:16:34.149150    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 20:16:49 old-k8s-version-149000 kubelet[3114]: E0717 20:16:49.150894    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 20:17:00 old-k8s-version-149000 kubelet[3114]: E0717 20:17:00.147113    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 20:17:12 old-k8s-version-149000 kubelet[3114]: E0717 20:17:12.147201    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 20:17:26 old-k8s-version-149000 kubelet[3114]: E0717 20:17:26.147060    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 20:17:41 old-k8s-version-149000 kubelet[3114]: E0717 20:17:41.147688    3114 pod_workers.go:191] Error syncing pod 493d4f17-8ddf-4d76-aa86-33fc669de018 ("metrics-server-74d5856cc6-cxzws_kube-system(493d4f17-8ddf-4d76-aa86-33fc669de018)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [bf6835e7df11c2226357f6c241d1255496d2ec1bbff4467041ba7a213fc6a1df] <==
	* I0717 20:05:28.805642       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 20:05:28.829060       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 20:05:28.829157       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 20:05:28.842003       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 20:05:28.842302       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-149000_3b7660e9-00ce-412a-8d74-43e33a1fc1be!
	I0717 20:05:28.844036       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2435fbc8-fa69-40c3-bcfe-3d130ef0c83f", APIVersion:"v1", ResourceVersion:"436", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-149000_3b7660e9-00ce-412a-8d74-43e33a1fc1be became leader
	I0717 20:05:28.943370       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-149000_3b7660e9-00ce-412a-8d74-43e33a1fc1be!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-149000 -n old-k8s-version-149000
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-149000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-cxzws
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-149000 describe pod metrics-server-74d5856cc6-cxzws
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-149000 describe pod metrics-server-74d5856cc6-cxzws: exit status 1 (81.88191ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-cxzws" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-149000 describe pod metrics-server-74d5856cc6-cxzws: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (143.86s)
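The kubelet log above shows why the metrics-server pod never reached Running: every pull of fake.domain/registry.k8s.io/echoserver:1.4 fails with a DNS lookup error, leaving the pod in ImagePullBackOff, and the pod had already been removed by the time the follow-up describe ran, hence the NotFound. A minimal sketch of repeating the same post-mortem checks by hand against this profile; the first command mirrors the helpers_test.go check logged above, while the label selector and jsonpath expression are assumptions, not part of the test harness:

	# list pods that are not Running, similar to the helpers_test.go check above
	kubectl --context old-k8s-version-149000 get po -A --field-selector=status.phase!=Running
	# inspect the pull failure on whatever metrics-server pod currently exists (label is assumed)
	kubectl --context old-k8s-version-149000 -n kube-system describe pod -l k8s-app=metrics-server
	# confirm which image the deployment is actually configured to pull
	kubectl --context old-k8s-version-149000 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'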

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (290.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0717 20:18:01.330845 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/functional-685960/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-114855 -n embed-certs-114855
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-07-17 20:22:48.521766397 +0000 UTC m=+5969.830461953
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-114855 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-114855 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.393µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-114855 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
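The assertion at start_stop_delete_test.go:297 compares the image recorded in the dashboard-metrics-scraper deployment against registry.k8s.io/echoserver:1.4, but the describe call above returned nothing because its context had already expired. A minimal sketch of querying that image directly; the jsonpath expression is an assumption and not taken from the test:

	kubectl --context embed-certs-114855 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[0].image}'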
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-114855 -n embed-certs-114855
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-114855 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-114855 logs -n 25: (1.511444922s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-395471 sudo                  | bridge-395471         | jenkins | v1.30.1 | 17 Jul 23 20:22 UTC |                     |
	|         | systemctl status containerd            |                       |         |         |                     |                     |
	|         | --all --full --no-pager                |                       |         |         |                     |                     |
	| ssh     | -p bridge-395471 sudo                  | bridge-395471         | jenkins | v1.30.1 | 17 Jul 23 20:22 UTC | 17 Jul 23 20:22 UTC |
	|         | systemctl cat containerd               |                       |         |         |                     |                     |
	|         | --no-pager                             |                       |         |         |                     |                     |
	| ssh     | -p bridge-395471 sudo cat              | bridge-395471         | jenkins | v1.30.1 | 17 Jul 23 20:22 UTC | 17 Jul 23 20:22 UTC |
	|         | /lib/systemd/system/containerd.service |                       |         |         |                     |                     |
	| ssh     | -p bridge-395471 sudo cat              | bridge-395471         | jenkins | v1.30.1 | 17 Jul 23 20:22 UTC | 17 Jul 23 20:22 UTC |
	|         | /etc/containerd/config.toml            |                       |         |         |                     |                     |
	| ssh     | -p calico-395471 pgrep -a              | calico-395471         | jenkins | v1.30.1 | 17 Jul 23 20:22 UTC | 17 Jul 23 20:22 UTC |
	|         | kubelet                                |                       |         |         |                     |                     |
	| ssh     | -p bridge-395471 sudo                  | bridge-395471         | jenkins | v1.30.1 | 17 Jul 23 20:22 UTC | 17 Jul 23 20:22 UTC |
	|         | containerd config dump                 |                       |         |         |                     |                     |
	| ssh     | -p bridge-395471 sudo                  | bridge-395471         | jenkins | v1.30.1 | 17 Jul 23 20:22 UTC | 17 Jul 23 20:22 UTC |
	|         | systemctl status crio --all            |                       |         |         |                     |                     |
	|         | --full --no-pager                      |                       |         |         |                     |                     |
	| ssh     | -p bridge-395471 sudo                  | bridge-395471         | jenkins | v1.30.1 | 17 Jul 23 20:22 UTC | 17 Jul 23 20:22 UTC |
	|         | systemctl cat crio --no-pager          |                       |         |         |                     |                     |
	| ssh     | -p bridge-395471 sudo find             | bridge-395471         | jenkins | v1.30.1 | 17 Jul 23 20:22 UTC | 17 Jul 23 20:22 UTC |
	|         | /etc/crio -type f -exec sh -c          |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                       |         |         |                     |                     |
	| ssh     | -p bridge-395471 sudo crio             | bridge-395471         | jenkins | v1.30.1 | 17 Jul 23 20:22 UTC | 17 Jul 23 20:22 UTC |
	|         | config                                 |                       |         |         |                     |                     |
	| delete  | -p bridge-395471                       | bridge-395471         | jenkins | v1.30.1 | 17 Jul 23 20:22 UTC | 17 Jul 23 20:22 UTC |
	| start   | -p custom-flannel-395471               | custom-flannel-395471 | jenkins | v1.30.1 | 17 Jul 23 20:22 UTC |                     |
	|         | --memory=3072 --alsologtostderr        |                       |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m         |                       |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml       |                       |         |         |                     |                     |
	|         | --driver=kvm2                          |                       |         |         |                     |                     |
	|         | --container-runtime=crio               |                       |         |         |                     |                     |
	| ssh     | -p calico-395471 sudo cat              | calico-395471         | jenkins | v1.30.1 | 17 Jul 23 20:22 UTC | 17 Jul 23 20:22 UTC |
	|         | /etc/nsswitch.conf                     |                       |         |         |                     |                     |
	| ssh     | -p calico-395471 sudo cat              | calico-395471         | jenkins | v1.30.1 | 17 Jul 23 20:22 UTC | 17 Jul 23 20:22 UTC |
	|         | /etc/hosts                             |                       |         |         |                     |                     |
	| ssh     | -p calico-395471 sudo cat              | calico-395471         | jenkins | v1.30.1 | 17 Jul 23 20:22 UTC | 17 Jul 23 20:22 UTC |
	|         | /etc/resolv.conf                       |                       |         |         |                     |                     |
	| ssh     | -p calico-395471 sudo crictl           | calico-395471         | jenkins | v1.30.1 | 17 Jul 23 20:22 UTC | 17 Jul 23 20:22 UTC |
	|         | pods                                   |                       |         |         |                     |                     |
	| ssh     | -p calico-395471 sudo crictl           | calico-395471         | jenkins | v1.30.1 | 17 Jul 23 20:22 UTC | 17 Jul 23 20:22 UTC |
	|         | ps --all                               |                       |         |         |                     |                     |
	| ssh     | -p calico-395471 sudo find             | calico-395471         | jenkins | v1.30.1 | 17 Jul 23 20:22 UTC | 17 Jul 23 20:22 UTC |
	|         | /etc/cni -type f -exec sh -c           |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                       |         |         |                     |                     |
	| ssh     | -p calico-395471 sudo ip a s           | calico-395471         | jenkins | v1.30.1 | 17 Jul 23 20:22 UTC | 17 Jul 23 20:22 UTC |
	| ssh     | -p calico-395471 sudo ip r s           | calico-395471         | jenkins | v1.30.1 | 17 Jul 23 20:22 UTC | 17 Jul 23 20:22 UTC |
	| ssh     | -p calico-395471 sudo                  | calico-395471         | jenkins | v1.30.1 | 17 Jul 23 20:22 UTC | 17 Jul 23 20:22 UTC |
	|         | iptables-save                          |                       |         |         |                     |                     |
	| ssh     | -p calico-395471 sudo iptables         | calico-395471         | jenkins | v1.30.1 | 17 Jul 23 20:22 UTC | 17 Jul 23 20:22 UTC |
	|         | -t nat -L -n -v                        |                       |         |         |                     |                     |
	| ssh     | -p calico-395471 sudo                  | calico-395471         | jenkins | v1.30.1 | 17 Jul 23 20:22 UTC | 17 Jul 23 20:22 UTC |
	|         | systemctl status kubelet --all         |                       |         |         |                     |                     |
	|         | --full --no-pager                      |                       |         |         |                     |                     |
	| ssh     | -p calico-395471 sudo                  | calico-395471         | jenkins | v1.30.1 | 17 Jul 23 20:22 UTC | 17 Jul 23 20:22 UTC |
	|         | systemctl cat kubelet                  |                       |         |         |                     |                     |
	|         | --no-pager                             |                       |         |         |                     |                     |
	| ssh     | -p calico-395471 sudo                  | calico-395471         | jenkins | v1.30.1 | 17 Jul 23 20:22 UTC | 17 Jul 23 20:22 UTC |
	|         | journalctl -xeu kubelet --all          |                       |         |         |                     |                     |
	|         | --full --no-pager                      |                       |         |         |                     |                     |
	|---------|----------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 20:22:25
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 20:22:25.693947 1115853 out.go:296] Setting OutFile to fd 1 ...
	I0717 20:22:25.694063 1115853 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 20:22:25.694067 1115853 out.go:309] Setting ErrFile to fd 2...
	I0717 20:22:25.694072 1115853 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 20:22:25.694312 1115853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1061725/.minikube/bin
	I0717 20:22:25.694970 1115853 out.go:303] Setting JSON to false
	I0717 20:22:25.696249 1115853 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":18297,"bootTime":1689607049,"procs":330,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 20:22:25.696327 1115853 start.go:138] virtualization: kvm guest
	I0717 20:22:25.700144 1115853 out.go:177] * [custom-flannel-395471] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 20:22:25.703176 1115853 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 20:22:25.703203 1115853 notify.go:220] Checking for updates...
	I0717 20:22:25.705225 1115853 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 20:22:25.707523 1115853 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 20:22:25.709570 1115853 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1061725/.minikube
	I0717 20:22:25.711624 1115853 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 20:22:25.713773 1115853 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 20:22:25.719518 1115853 config.go:182] Loaded profile config "calico-395471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 20:22:25.719705 1115853 config.go:182] Loaded profile config "embed-certs-114855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 20:22:25.719853 1115853 config.go:182] Loaded profile config "kindnet-395471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 20:22:25.720028 1115853 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 20:22:25.765669 1115853 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 20:22:25.767749 1115853 start.go:298] selected driver: kvm2
	I0717 20:22:25.767772 1115853 start.go:880] validating driver "kvm2" against <nil>
	I0717 20:22:25.767785 1115853 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 20:22:25.768562 1115853 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 20:22:25.768646 1115853 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16890-1061725/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 20:22:25.786776 1115853 install.go:137] /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2 version is 1.30.1
	I0717 20:22:25.786855 1115853 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 20:22:25.787126 1115853 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 20:22:25.787177 1115853 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0717 20:22:25.787195 1115853 start_flags.go:314] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0717 20:22:25.787210 1115853 start_flags.go:319] config:
	{Name:custom-flannel-395471 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:custom-flannel-395471 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: Ne
tworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 20:22:25.787410 1115853 iso.go:125] acquiring lock: {Name:mk4c08fdde891ec9d41b144ca36ec57b2da32175 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 20:22:25.790237 1115853 out.go:177] * Starting control plane node custom-flannel-395471 in cluster custom-flannel-395471
	I0717 20:22:25.400858 1114238 main.go:141] libmachine: (kindnet-395471) DBG | domain kindnet-395471 has defined MAC address 52:54:00:13:a1:93 in network mk-kindnet-395471
	I0717 20:22:25.401335 1114238 main.go:141] libmachine: (kindnet-395471) Found IP for machine: 192.168.72.185
	I0717 20:22:25.401367 1114238 main.go:141] libmachine: (kindnet-395471) DBG | domain kindnet-395471 has current primary IP address 192.168.72.185 and MAC address 52:54:00:13:a1:93 in network mk-kindnet-395471
	I0717 20:22:25.401377 1114238 main.go:141] libmachine: (kindnet-395471) Reserving static IP address...
	I0717 20:22:25.401810 1114238 main.go:141] libmachine: (kindnet-395471) DBG | unable to find host DHCP lease matching {name: "kindnet-395471", mac: "52:54:00:13:a1:93", ip: "192.168.72.185"} in network mk-kindnet-395471
	I0717 20:22:25.496957 1114238 main.go:141] libmachine: (kindnet-395471) DBG | Getting to WaitForSSH function...
	I0717 20:22:25.496992 1114238 main.go:141] libmachine: (kindnet-395471) Reserved static IP address: 192.168.72.185
	I0717 20:22:25.497006 1114238 main.go:141] libmachine: (kindnet-395471) Waiting for SSH to be available...
	I0717 20:22:25.500369 1114238 main.go:141] libmachine: (kindnet-395471) DBG | domain kindnet-395471 has defined MAC address 52:54:00:13:a1:93 in network mk-kindnet-395471
	I0717 20:22:25.500971 1114238 main.go:141] libmachine: (kindnet-395471) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:13:a1:93", ip: ""} in network mk-kindnet-395471
	I0717 20:22:25.501008 1114238 main.go:141] libmachine: (kindnet-395471) DBG | unable to find defined IP address of network mk-kindnet-395471 interface with MAC address 52:54:00:13:a1:93
	I0717 20:22:25.501161 1114238 main.go:141] libmachine: (kindnet-395471) DBG | Using SSH client type: external
	I0717 20:22:25.501201 1114238 main.go:141] libmachine: (kindnet-395471) DBG | Using SSH private key: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/kindnet-395471/id_rsa (-rw-------)
	I0717 20:22:25.501237 1114238 main.go:141] libmachine: (kindnet-395471) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/kindnet-395471/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 20:22:25.501254 1114238 main.go:141] libmachine: (kindnet-395471) DBG | About to run SSH command:
	I0717 20:22:25.501270 1114238 main.go:141] libmachine: (kindnet-395471) DBG | exit 0
	I0717 20:22:25.505541 1114238 main.go:141] libmachine: (kindnet-395471) DBG | SSH cmd err, output: exit status 255: 
	I0717 20:22:25.505619 1114238 main.go:141] libmachine: (kindnet-395471) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0717 20:22:25.505632 1114238 main.go:141] libmachine: (kindnet-395471) DBG | command : exit 0
	I0717 20:22:25.505641 1114238 main.go:141] libmachine: (kindnet-395471) DBG | err     : exit status 255
	I0717 20:22:25.505655 1114238 main.go:141] libmachine: (kindnet-395471) DBG | output  : 
	I0717 20:22:25.792189 1115853 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 20:22:25.792260 1115853 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0717 20:22:25.792277 1115853 cache.go:57] Caching tarball of preloaded images
	I0717 20:22:25.792398 1115853 preload.go:174] Found /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 20:22:25.792410 1115853 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 20:22:25.792571 1115853 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/custom-flannel-395471/config.json ...
	I0717 20:22:25.792605 1115853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/custom-flannel-395471/config.json: {Name:mk366a91dba881b2ac8ec42bc8b6b3b5acf6facc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:22:25.792822 1115853 start.go:365] acquiring machines lock for custom-flannel-395471: {Name:mk1fbc5d19c9a63796b02883911e39f1c406ff89 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 20:22:30.639290 1115853 start.go:369] acquired machines lock for "custom-flannel-395471" in 4.846409191s
	I0717 20:22:30.639375 1115853 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-395471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:custom
-flannel-395471 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 20:22:30.639526 1115853 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 20:22:30.643439 1115853 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 20:22:30.643665 1115853 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/16890-1061725/.minikube/bin/docker-machine-driver-kvm2
	I0717 20:22:30.643741 1115853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 20:22:30.664946 1115853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45659
	I0717 20:22:30.665482 1115853 main.go:141] libmachine: () Calling .GetVersion
	I0717 20:22:30.666217 1115853 main.go:141] libmachine: Using API Version  1
	I0717 20:22:30.666242 1115853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 20:22:30.666678 1115853 main.go:141] libmachine: () Calling .GetMachineName
	I0717 20:22:30.666892 1115853 main.go:141] libmachine: (custom-flannel-395471) Calling .GetMachineName
	I0717 20:22:30.667070 1115853 main.go:141] libmachine: (custom-flannel-395471) Calling .DriverName
	I0717 20:22:30.667207 1115853 start.go:159] libmachine.API.Create for "custom-flannel-395471" (driver="kvm2")
	I0717 20:22:30.667241 1115853 client.go:168] LocalClient.Create starting
	I0717 20:22:30.667283 1115853 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem
	I0717 20:22:30.667334 1115853 main.go:141] libmachine: Decoding PEM data...
	I0717 20:22:30.667358 1115853 main.go:141] libmachine: Parsing certificate...
	I0717 20:22:30.667422 1115853 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem
	I0717 20:22:30.667448 1115853 main.go:141] libmachine: Decoding PEM data...
	I0717 20:22:30.667459 1115853 main.go:141] libmachine: Parsing certificate...
	I0717 20:22:30.667479 1115853 main.go:141] libmachine: Running pre-create checks...
	I0717 20:22:30.667488 1115853 main.go:141] libmachine: (custom-flannel-395471) Calling .PreCreateCheck
	I0717 20:22:30.667888 1115853 main.go:141] libmachine: (custom-flannel-395471) Calling .GetConfigRaw
	I0717 20:22:30.668334 1115853 main.go:141] libmachine: Creating machine...
	I0717 20:22:30.668351 1115853 main.go:141] libmachine: (custom-flannel-395471) Calling .Create
	I0717 20:22:30.668525 1115853 main.go:141] libmachine: (custom-flannel-395471) Creating KVM machine...
	I0717 20:22:30.670219 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | found existing default KVM network
	I0717 20:22:30.671897 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | I0717 20:22:30.671681 1115888 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:fc:80:4b} reservation:<nil>}
	I0717 20:22:30.673607 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | I0717 20:22:30.673467 1115888 network.go:209] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00011dfe0}
	I0717 20:22:30.680278 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | trying to create private KVM network mk-custom-flannel-395471 192.168.50.0/24...
	I0717 20:22:28.507182 1114238 main.go:141] libmachine: (kindnet-395471) DBG | Getting to WaitForSSH function...
	I0717 20:22:28.510061 1114238 main.go:141] libmachine: (kindnet-395471) DBG | domain kindnet-395471 has defined MAC address 52:54:00:13:a1:93 in network mk-kindnet-395471
	I0717 20:22:28.510487 1114238 main.go:141] libmachine: (kindnet-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:a1:93", ip: ""} in network mk-kindnet-395471: {Iface:virbr2 ExpiryTime:2023-07-17 21:22:19 +0000 UTC Type:0 Mac:52:54:00:13:a1:93 Iaid: IPaddr:192.168.72.185 Prefix:24 Hostname:kindnet-395471 Clientid:01:52:54:00:13:a1:93}
	I0717 20:22:28.510522 1114238 main.go:141] libmachine: (kindnet-395471) DBG | domain kindnet-395471 has defined IP address 192.168.72.185 and MAC address 52:54:00:13:a1:93 in network mk-kindnet-395471
	I0717 20:22:28.510644 1114238 main.go:141] libmachine: (kindnet-395471) DBG | Using SSH client type: external
	I0717 20:22:28.510674 1114238 main.go:141] libmachine: (kindnet-395471) DBG | Using SSH private key: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/kindnet-395471/id_rsa (-rw-------)
	I0717 20:22:28.510720 1114238 main.go:141] libmachine: (kindnet-395471) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.185 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/kindnet-395471/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 20:22:28.510750 1114238 main.go:141] libmachine: (kindnet-395471) DBG | About to run SSH command:
	I0717 20:22:28.510763 1114238 main.go:141] libmachine: (kindnet-395471) DBG | exit 0
	I0717 20:22:28.606623 1114238 main.go:141] libmachine: (kindnet-395471) DBG | SSH cmd err, output: <nil>: 
	I0717 20:22:28.606921 1114238 main.go:141] libmachine: (kindnet-395471) KVM machine creation complete!
	I0717 20:22:28.607251 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetConfigRaw
	I0717 20:22:28.607798 1114238 main.go:141] libmachine: (kindnet-395471) Calling .DriverName
	I0717 20:22:28.608016 1114238 main.go:141] libmachine: (kindnet-395471) Calling .DriverName
	I0717 20:22:28.608202 1114238 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 20:22:28.608220 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetState
	I0717 20:22:28.609756 1114238 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 20:22:28.609779 1114238 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 20:22:28.609789 1114238 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 20:22:28.609800 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHHostname
	I0717 20:22:28.612615 1114238 main.go:141] libmachine: (kindnet-395471) DBG | domain kindnet-395471 has defined MAC address 52:54:00:13:a1:93 in network mk-kindnet-395471
	I0717 20:22:28.613092 1114238 main.go:141] libmachine: (kindnet-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:a1:93", ip: ""} in network mk-kindnet-395471: {Iface:virbr2 ExpiryTime:2023-07-17 21:22:19 +0000 UTC Type:0 Mac:52:54:00:13:a1:93 Iaid: IPaddr:192.168.72.185 Prefix:24 Hostname:kindnet-395471 Clientid:01:52:54:00:13:a1:93}
	I0717 20:22:28.613129 1114238 main.go:141] libmachine: (kindnet-395471) DBG | domain kindnet-395471 has defined IP address 192.168.72.185 and MAC address 52:54:00:13:a1:93 in network mk-kindnet-395471
	I0717 20:22:28.613277 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHPort
	I0717 20:22:28.613506 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHKeyPath
	I0717 20:22:28.613722 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHKeyPath
	I0717 20:22:28.613871 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHUsername
	I0717 20:22:28.614037 1114238 main.go:141] libmachine: Using SSH client type: native
	I0717 20:22:28.614474 1114238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.185 22 <nil> <nil>}
	I0717 20:22:28.614492 1114238 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 20:22:28.749411 1114238 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 20:22:28.749438 1114238 main.go:141] libmachine: Detecting the provisioner...
	I0717 20:22:28.749451 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHHostname
	I0717 20:22:28.752481 1114238 main.go:141] libmachine: (kindnet-395471) DBG | domain kindnet-395471 has defined MAC address 52:54:00:13:a1:93 in network mk-kindnet-395471
	I0717 20:22:28.752911 1114238 main.go:141] libmachine: (kindnet-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:a1:93", ip: ""} in network mk-kindnet-395471: {Iface:virbr2 ExpiryTime:2023-07-17 21:22:19 +0000 UTC Type:0 Mac:52:54:00:13:a1:93 Iaid: IPaddr:192.168.72.185 Prefix:24 Hostname:kindnet-395471 Clientid:01:52:54:00:13:a1:93}
	I0717 20:22:28.752954 1114238 main.go:141] libmachine: (kindnet-395471) DBG | domain kindnet-395471 has defined IP address 192.168.72.185 and MAC address 52:54:00:13:a1:93 in network mk-kindnet-395471
	I0717 20:22:28.753116 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHPort
	I0717 20:22:28.753340 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHKeyPath
	I0717 20:22:28.753539 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHKeyPath
	I0717 20:22:28.753731 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHUsername
	I0717 20:22:28.753939 1114238 main.go:141] libmachine: Using SSH client type: native
	I0717 20:22:28.754561 1114238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.185 22 <nil> <nil>}
	I0717 20:22:28.754588 1114238 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 20:22:28.886804 1114238 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gf5d52c7-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0717 20:22:28.887013 1114238 main.go:141] libmachine: found compatible host: buildroot
	I0717 20:22:28.887036 1114238 main.go:141] libmachine: Provisioning with buildroot...
	I0717 20:22:28.887050 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetMachineName
	I0717 20:22:28.887382 1114238 buildroot.go:166] provisioning hostname "kindnet-395471"
	I0717 20:22:28.887425 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetMachineName
	I0717 20:22:28.887646 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHHostname
	I0717 20:22:28.890953 1114238 main.go:141] libmachine: (kindnet-395471) DBG | domain kindnet-395471 has defined MAC address 52:54:00:13:a1:93 in network mk-kindnet-395471
	I0717 20:22:28.891333 1114238 main.go:141] libmachine: (kindnet-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:a1:93", ip: ""} in network mk-kindnet-395471: {Iface:virbr2 ExpiryTime:2023-07-17 21:22:19 +0000 UTC Type:0 Mac:52:54:00:13:a1:93 Iaid: IPaddr:192.168.72.185 Prefix:24 Hostname:kindnet-395471 Clientid:01:52:54:00:13:a1:93}
	I0717 20:22:28.891372 1114238 main.go:141] libmachine: (kindnet-395471) DBG | domain kindnet-395471 has defined IP address 192.168.72.185 and MAC address 52:54:00:13:a1:93 in network mk-kindnet-395471
	I0717 20:22:28.891688 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHPort
	I0717 20:22:28.891938 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHKeyPath
	I0717 20:22:28.892158 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHKeyPath
	I0717 20:22:28.892362 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHUsername
	I0717 20:22:28.892579 1114238 main.go:141] libmachine: Using SSH client type: native
	I0717 20:22:28.893199 1114238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.185 22 <nil> <nil>}
	I0717 20:22:28.893232 1114238 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-395471 && echo "kindnet-395471" | sudo tee /etc/hostname
	I0717 20:22:29.047409 1114238 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-395471
	
	I0717 20:22:29.047449 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHHostname
	I0717 20:22:29.050547 1114238 main.go:141] libmachine: (kindnet-395471) DBG | domain kindnet-395471 has defined MAC address 52:54:00:13:a1:93 in network mk-kindnet-395471
	I0717 20:22:29.050909 1114238 main.go:141] libmachine: (kindnet-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:a1:93", ip: ""} in network mk-kindnet-395471: {Iface:virbr2 ExpiryTime:2023-07-17 21:22:19 +0000 UTC Type:0 Mac:52:54:00:13:a1:93 Iaid: IPaddr:192.168.72.185 Prefix:24 Hostname:kindnet-395471 Clientid:01:52:54:00:13:a1:93}
	I0717 20:22:29.050954 1114238 main.go:141] libmachine: (kindnet-395471) DBG | domain kindnet-395471 has defined IP address 192.168.72.185 and MAC address 52:54:00:13:a1:93 in network mk-kindnet-395471
	I0717 20:22:29.051064 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHPort
	I0717 20:22:29.051279 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHKeyPath
	I0717 20:22:29.051504 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHKeyPath
	I0717 20:22:29.051663 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHUsername
	I0717 20:22:29.051853 1114238 main.go:141] libmachine: Using SSH client type: native
	I0717 20:22:29.052249 1114238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.185 22 <nil> <nil>}
	I0717 20:22:29.052266 1114238 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-395471' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-395471/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-395471' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 20:22:29.202332 1114238 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 20:22:29.202371 1114238 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16890-1061725/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-1061725/.minikube}
	I0717 20:22:29.202418 1114238 buildroot.go:174] setting up certificates
	I0717 20:22:29.202439 1114238 provision.go:83] configureAuth start
	I0717 20:22:29.202457 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetMachineName
	I0717 20:22:29.202810 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetIP
	I0717 20:22:29.205938 1114238 main.go:141] libmachine: (kindnet-395471) DBG | domain kindnet-395471 has defined MAC address 52:54:00:13:a1:93 in network mk-kindnet-395471
	I0717 20:22:29.206329 1114238 main.go:141] libmachine: (kindnet-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:a1:93", ip: ""} in network mk-kindnet-395471: {Iface:virbr2 ExpiryTime:2023-07-17 21:22:19 +0000 UTC Type:0 Mac:52:54:00:13:a1:93 Iaid: IPaddr:192.168.72.185 Prefix:24 Hostname:kindnet-395471 Clientid:01:52:54:00:13:a1:93}
	I0717 20:22:29.206369 1114238 main.go:141] libmachine: (kindnet-395471) DBG | domain kindnet-395471 has defined IP address 192.168.72.185 and MAC address 52:54:00:13:a1:93 in network mk-kindnet-395471
	I0717 20:22:29.206585 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHHostname
	I0717 20:22:29.209829 1114238 main.go:141] libmachine: (kindnet-395471) DBG | domain kindnet-395471 has defined MAC address 52:54:00:13:a1:93 in network mk-kindnet-395471
	I0717 20:22:29.210269 1114238 main.go:141] libmachine: (kindnet-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:a1:93", ip: ""} in network mk-kindnet-395471: {Iface:virbr2 ExpiryTime:2023-07-17 21:22:19 +0000 UTC Type:0 Mac:52:54:00:13:a1:93 Iaid: IPaddr:192.168.72.185 Prefix:24 Hostname:kindnet-395471 Clientid:01:52:54:00:13:a1:93}
	I0717 20:22:29.210332 1114238 main.go:141] libmachine: (kindnet-395471) DBG | domain kindnet-395471 has defined IP address 192.168.72.185 and MAC address 52:54:00:13:a1:93 in network mk-kindnet-395471
	I0717 20:22:29.210532 1114238 provision.go:138] copyHostCerts
	I0717 20:22:29.210645 1114238 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem, removing ...
	I0717 20:22:29.210659 1114238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem
	I0717 20:22:29.210750 1114238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.pem (1082 bytes)
	I0717 20:22:29.210862 1114238 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem, removing ...
	I0717 20:22:29.210875 1114238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem
	I0717 20:22:29.210901 1114238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/cert.pem (1123 bytes)
	I0717 20:22:29.210970 1114238 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem, removing ...
	I0717 20:22:29.210978 1114238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem
	I0717 20:22:29.211000 1114238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-1061725/.minikube/key.pem (1675 bytes)
	I0717 20:22:29.211067 1114238 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem org=jenkins.kindnet-395471 san=[192.168.72.185 192.168.72.185 localhost 127.0.0.1 minikube kindnet-395471]
	I0717 20:22:29.792402 1114238 provision.go:172] copyRemoteCerts
	I0717 20:22:29.792484 1114238 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 20:22:29.792512 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHHostname
	I0717 20:22:29.795456 1114238 main.go:141] libmachine: (kindnet-395471) DBG | domain kindnet-395471 has defined MAC address 52:54:00:13:a1:93 in network mk-kindnet-395471
	I0717 20:22:29.795842 1114238 main.go:141] libmachine: (kindnet-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:a1:93", ip: ""} in network mk-kindnet-395471: {Iface:virbr2 ExpiryTime:2023-07-17 21:22:19 +0000 UTC Type:0 Mac:52:54:00:13:a1:93 Iaid: IPaddr:192.168.72.185 Prefix:24 Hostname:kindnet-395471 Clientid:01:52:54:00:13:a1:93}
	I0717 20:22:29.795885 1114238 main.go:141] libmachine: (kindnet-395471) DBG | domain kindnet-395471 has defined IP address 192.168.72.185 and MAC address 52:54:00:13:a1:93 in network mk-kindnet-395471
	I0717 20:22:29.796016 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHPort
	I0717 20:22:29.796237 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHKeyPath
	I0717 20:22:29.796426 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHUsername
	I0717 20:22:29.796621 1114238 sshutil.go:53] new ssh client: &{IP:192.168.72.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/kindnet-395471/id_rsa Username:docker}
	I0717 20:22:29.892141 1114238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 20:22:29.922447 1114238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 20:22:29.948206 1114238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 20:22:29.974428 1114238 provision.go:86] duration metric: configureAuth took 771.970748ms
	I0717 20:22:29.974461 1114238 buildroot.go:189] setting minikube options for container-runtime
	I0717 20:22:29.974643 1114238 config.go:182] Loaded profile config "kindnet-395471": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 20:22:29.974722 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHHostname
	I0717 20:22:29.977882 1114238 main.go:141] libmachine: (kindnet-395471) DBG | domain kindnet-395471 has defined MAC address 52:54:00:13:a1:93 in network mk-kindnet-395471
	I0717 20:22:29.978302 1114238 main.go:141] libmachine: (kindnet-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:a1:93", ip: ""} in network mk-kindnet-395471: {Iface:virbr2 ExpiryTime:2023-07-17 21:22:19 +0000 UTC Type:0 Mac:52:54:00:13:a1:93 Iaid: IPaddr:192.168.72.185 Prefix:24 Hostname:kindnet-395471 Clientid:01:52:54:00:13:a1:93}
	I0717 20:22:29.978343 1114238 main.go:141] libmachine: (kindnet-395471) DBG | domain kindnet-395471 has defined IP address 192.168.72.185 and MAC address 52:54:00:13:a1:93 in network mk-kindnet-395471
	I0717 20:22:29.978484 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHPort
	I0717 20:22:29.978762 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHKeyPath
	I0717 20:22:29.978980 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHKeyPath
	I0717 20:22:29.979188 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHUsername
	I0717 20:22:29.979377 1114238 main.go:141] libmachine: Using SSH client type: native
	I0717 20:22:29.979827 1114238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.185 22 <nil> <nil>}
	I0717 20:22:29.979851 1114238 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 20:22:30.350326 1114238 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 20:22:30.350360 1114238 main.go:141] libmachine: Checking connection to Docker...
	I0717 20:22:30.350373 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetURL
	I0717 20:22:30.351811 1114238 main.go:141] libmachine: (kindnet-395471) DBG | Using libvirt version 6000000
	I0717 20:22:30.354496 1114238 main.go:141] libmachine: (kindnet-395471) DBG | domain kindnet-395471 has defined MAC address 52:54:00:13:a1:93 in network mk-kindnet-395471
	I0717 20:22:30.354989 1114238 main.go:141] libmachine: (kindnet-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:a1:93", ip: ""} in network mk-kindnet-395471: {Iface:virbr2 ExpiryTime:2023-07-17 21:22:19 +0000 UTC Type:0 Mac:52:54:00:13:a1:93 Iaid: IPaddr:192.168.72.185 Prefix:24 Hostname:kindnet-395471 Clientid:01:52:54:00:13:a1:93}
	I0717 20:22:30.355032 1114238 main.go:141] libmachine: (kindnet-395471) DBG | domain kindnet-395471 has defined IP address 192.168.72.185 and MAC address 52:54:00:13:a1:93 in network mk-kindnet-395471
	I0717 20:22:30.355209 1114238 main.go:141] libmachine: Docker is up and running!
	I0717 20:22:30.355237 1114238 main.go:141] libmachine: Reticulating splines...
	I0717 20:22:30.355245 1114238 client.go:171] LocalClient.Create took 28.968613801s
	I0717 20:22:30.355272 1114238 start.go:167] duration metric: libmachine.API.Create for "kindnet-395471" took 28.968690381s
	I0717 20:22:30.355287 1114238 start.go:300] post-start starting for "kindnet-395471" (driver="kvm2")
	I0717 20:22:30.355302 1114238 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 20:22:30.355330 1114238 main.go:141] libmachine: (kindnet-395471) Calling .DriverName
	I0717 20:22:30.355618 1114238 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 20:22:30.355651 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHHostname
	I0717 20:22:30.358598 1114238 main.go:141] libmachine: (kindnet-395471) DBG | domain kindnet-395471 has defined MAC address 52:54:00:13:a1:93 in network mk-kindnet-395471
	I0717 20:22:30.359038 1114238 main.go:141] libmachine: (kindnet-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:a1:93", ip: ""} in network mk-kindnet-395471: {Iface:virbr2 ExpiryTime:2023-07-17 21:22:19 +0000 UTC Type:0 Mac:52:54:00:13:a1:93 Iaid: IPaddr:192.168.72.185 Prefix:24 Hostname:kindnet-395471 Clientid:01:52:54:00:13:a1:93}
	I0717 20:22:30.359100 1114238 main.go:141] libmachine: (kindnet-395471) DBG | domain kindnet-395471 has defined IP address 192.168.72.185 and MAC address 52:54:00:13:a1:93 in network mk-kindnet-395471
	I0717 20:22:30.359190 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHPort
	I0717 20:22:30.359443 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHKeyPath
	I0717 20:22:30.359644 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHUsername
	I0717 20:22:30.359822 1114238 sshutil.go:53] new ssh client: &{IP:192.168.72.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/kindnet-395471/id_rsa Username:docker}
	I0717 20:22:30.456095 1114238 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 20:22:30.461022 1114238 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 20:22:30.461080 1114238 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/addons for local assets ...
	I0717 20:22:30.461171 1114238 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1061725/.minikube/files for local assets ...
	I0717 20:22:30.461272 1114238 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem -> 10689542.pem in /etc/ssl/certs
	I0717 20:22:30.461391 1114238 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 20:22:30.470606 1114238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 20:22:30.497546 1114238 start.go:303] post-start completed in 142.235809ms
	I0717 20:22:30.497629 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetConfigRaw
	I0717 20:22:30.498290 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetIP
	I0717 20:22:30.501201 1114238 main.go:141] libmachine: (kindnet-395471) DBG | domain kindnet-395471 has defined MAC address 52:54:00:13:a1:93 in network mk-kindnet-395471
	I0717 20:22:30.501578 1114238 main.go:141] libmachine: (kindnet-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:a1:93", ip: ""} in network mk-kindnet-395471: {Iface:virbr2 ExpiryTime:2023-07-17 21:22:19 +0000 UTC Type:0 Mac:52:54:00:13:a1:93 Iaid: IPaddr:192.168.72.185 Prefix:24 Hostname:kindnet-395471 Clientid:01:52:54:00:13:a1:93}
	I0717 20:22:30.501616 1114238 main.go:141] libmachine: (kindnet-395471) DBG | domain kindnet-395471 has defined IP address 192.168.72.185 and MAC address 52:54:00:13:a1:93 in network mk-kindnet-395471
	I0717 20:22:30.501879 1114238 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/kindnet-395471/config.json ...
	I0717 20:22:30.502080 1114238 start.go:128] duration metric: createHost completed in 29.137210319s
	I0717 20:22:30.502103 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHHostname
	I0717 20:22:30.505059 1114238 main.go:141] libmachine: (kindnet-395471) DBG | domain kindnet-395471 has defined MAC address 52:54:00:13:a1:93 in network mk-kindnet-395471
	I0717 20:22:30.505397 1114238 main.go:141] libmachine: (kindnet-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:a1:93", ip: ""} in network mk-kindnet-395471: {Iface:virbr2 ExpiryTime:2023-07-17 21:22:19 +0000 UTC Type:0 Mac:52:54:00:13:a1:93 Iaid: IPaddr:192.168.72.185 Prefix:24 Hostname:kindnet-395471 Clientid:01:52:54:00:13:a1:93}
	I0717 20:22:30.505432 1114238 main.go:141] libmachine: (kindnet-395471) DBG | domain kindnet-395471 has defined IP address 192.168.72.185 and MAC address 52:54:00:13:a1:93 in network mk-kindnet-395471
	I0717 20:22:30.505579 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHPort
	I0717 20:22:30.505861 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHKeyPath
	I0717 20:22:30.506066 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHKeyPath
	I0717 20:22:30.506282 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHUsername
	I0717 20:22:30.506497 1114238 main.go:141] libmachine: Using SSH client type: native
	I0717 20:22:30.507134 1114238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.185 22 <nil> <nil>}
	I0717 20:22:30.507155 1114238 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 20:22:30.639058 1114238 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689625350.622484535
	
	I0717 20:22:30.639102 1114238 fix.go:206] guest clock: 1689625350.622484535
	I0717 20:22:30.639113 1114238 fix.go:219] Guest: 2023-07-17 20:22:30.622484535 +0000 UTC Remote: 2023-07-17 20:22:30.502092159 +0000 UTC m=+29.290801049 (delta=120.392376ms)
	I0717 20:22:30.639146 1114238 fix.go:190] guest clock delta is within tolerance: 120.392376ms
	I0717 20:22:30.639156 1114238 start.go:83] releasing machines lock for "kindnet-395471", held for 29.274436532s
	I0717 20:22:30.639208 1114238 main.go:141] libmachine: (kindnet-395471) Calling .DriverName
	I0717 20:22:30.639545 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetIP
	I0717 20:22:30.642710 1114238 main.go:141] libmachine: (kindnet-395471) DBG | domain kindnet-395471 has defined MAC address 52:54:00:13:a1:93 in network mk-kindnet-395471
	I0717 20:22:30.643200 1114238 main.go:141] libmachine: (kindnet-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:a1:93", ip: ""} in network mk-kindnet-395471: {Iface:virbr2 ExpiryTime:2023-07-17 21:22:19 +0000 UTC Type:0 Mac:52:54:00:13:a1:93 Iaid: IPaddr:192.168.72.185 Prefix:24 Hostname:kindnet-395471 Clientid:01:52:54:00:13:a1:93}
	I0717 20:22:30.643243 1114238 main.go:141] libmachine: (kindnet-395471) DBG | domain kindnet-395471 has defined IP address 192.168.72.185 and MAC address 52:54:00:13:a1:93 in network mk-kindnet-395471
	I0717 20:22:30.643389 1114238 main.go:141] libmachine: (kindnet-395471) Calling .DriverName
	I0717 20:22:30.643972 1114238 main.go:141] libmachine: (kindnet-395471) Calling .DriverName
	I0717 20:22:30.644232 1114238 main.go:141] libmachine: (kindnet-395471) Calling .DriverName
	I0717 20:22:30.644340 1114238 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 20:22:30.644411 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHHostname
	I0717 20:22:30.644526 1114238 ssh_runner.go:195] Run: cat /version.json
	I0717 20:22:30.644562 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHHostname
	I0717 20:22:30.647636 1114238 main.go:141] libmachine: (kindnet-395471) DBG | domain kindnet-395471 has defined MAC address 52:54:00:13:a1:93 in network mk-kindnet-395471
	I0717 20:22:30.647921 1114238 main.go:141] libmachine: (kindnet-395471) DBG | domain kindnet-395471 has defined MAC address 52:54:00:13:a1:93 in network mk-kindnet-395471
	I0717 20:22:30.648040 1114238 main.go:141] libmachine: (kindnet-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:a1:93", ip: ""} in network mk-kindnet-395471: {Iface:virbr2 ExpiryTime:2023-07-17 21:22:19 +0000 UTC Type:0 Mac:52:54:00:13:a1:93 Iaid: IPaddr:192.168.72.185 Prefix:24 Hostname:kindnet-395471 Clientid:01:52:54:00:13:a1:93}
	I0717 20:22:30.648075 1114238 main.go:141] libmachine: (kindnet-395471) DBG | domain kindnet-395471 has defined IP address 192.168.72.185 and MAC address 52:54:00:13:a1:93 in network mk-kindnet-395471
	I0717 20:22:30.648360 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHPort
	I0717 20:22:30.648435 1114238 main.go:141] libmachine: (kindnet-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:a1:93", ip: ""} in network mk-kindnet-395471: {Iface:virbr2 ExpiryTime:2023-07-17 21:22:19 +0000 UTC Type:0 Mac:52:54:00:13:a1:93 Iaid: IPaddr:192.168.72.185 Prefix:24 Hostname:kindnet-395471 Clientid:01:52:54:00:13:a1:93}
	I0717 20:22:30.648482 1114238 main.go:141] libmachine: (kindnet-395471) DBG | domain kindnet-395471 has defined IP address 192.168.72.185 and MAC address 52:54:00:13:a1:93 in network mk-kindnet-395471
	I0717 20:22:30.648624 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHKeyPath
	I0717 20:22:30.648758 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHPort
	I0717 20:22:30.648969 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHUsername
	I0717 20:22:30.649024 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHKeyPath
	I0717 20:22:30.649254 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetSSHUsername
	I0717 20:22:30.649268 1114238 sshutil.go:53] new ssh client: &{IP:192.168.72.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/kindnet-395471/id_rsa Username:docker}
	I0717 20:22:30.649474 1114238 sshutil.go:53] new ssh client: &{IP:192.168.72.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/kindnet-395471/id_rsa Username:docker}
	W0717 20:22:30.777646 1114238 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 20:22:30.777740 1114238 ssh_runner.go:195] Run: systemctl --version
	I0717 20:22:30.785459 1114238 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 20:22:30.963851 1114238 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 20:22:30.970741 1114238 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 20:22:30.970841 1114238 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 20:22:30.989176 1114238 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 20:22:30.989208 1114238 start.go:469] detecting cgroup driver to use...
	I0717 20:22:30.989286 1114238 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 20:22:31.005052 1114238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 20:22:31.019912 1114238 docker.go:196] disabling cri-docker service (if available) ...
	I0717 20:22:31.019991 1114238 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 20:22:31.034414 1114238 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 20:22:31.050408 1114238 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 20:22:31.167681 1114238 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 20:22:31.296148 1114238 docker.go:212] disabling docker service ...
	I0717 20:22:31.296249 1114238 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 20:22:31.313373 1114238 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 20:22:31.327134 1114238 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 20:22:31.454811 1114238 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 20:22:31.598963 1114238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 20:22:31.612223 1114238 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 20:22:31.632761 1114238 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 20:22:31.632825 1114238 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 20:22:31.644566 1114238 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 20:22:31.644654 1114238 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 20:22:31.655403 1114238 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 20:22:31.665948 1114238 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 20:22:31.677007 1114238 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 20:22:31.689711 1114238 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 20:22:31.699653 1114238 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 20:22:31.699757 1114238 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 20:22:31.714883 1114238 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 20:22:31.726402 1114238 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 20:22:31.869537 1114238 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 20:22:32.065639 1114238 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 20:22:32.065735 1114238 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 20:22:32.071618 1114238 start.go:537] Will wait 60s for crictl version
	I0717 20:22:32.071712 1114238 ssh_runner.go:195] Run: which crictl
	I0717 20:22:32.076905 1114238 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 20:22:32.114976 1114238 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 20:22:32.115110 1114238 ssh_runner.go:195] Run: crio --version
	I0717 20:22:32.167313 1114238 ssh_runner.go:195] Run: crio --version
	I0717 20:22:32.233667 1114238 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0717 20:22:30.783345 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | private KVM network mk-custom-flannel-395471 192.168.50.0/24 created
	I0717 20:22:30.783382 1115853 main.go:141] libmachine: (custom-flannel-395471) Setting up store path in /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/custom-flannel-395471 ...
	I0717 20:22:30.783397 1115853 main.go:141] libmachine: (custom-flannel-395471) Building disk image from file:///home/jenkins/minikube-integration/16890-1061725/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso
	I0717 20:22:30.783416 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | I0717 20:22:30.783307 1115888 common.go:116] Making disk image using store path: /home/jenkins/minikube-integration/16890-1061725/.minikube
	I0717 20:22:30.783441 1115853 main.go:141] libmachine: (custom-flannel-395471) Downloading /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/16890-1061725/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso...
	I0717 20:22:31.042477 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | I0717 20:22:31.042255 1115888 common.go:123] Creating ssh key: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/custom-flannel-395471/id_rsa...
	I0717 20:22:31.326574 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | I0717 20:22:31.326403 1115888 common.go:129] Creating raw disk image: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/custom-flannel-395471/custom-flannel-395471.rawdisk...
	I0717 20:22:31.326621 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | Writing magic tar header
	I0717 20:22:31.326641 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | Writing SSH key tar header
	I0717 20:22:31.326654 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | I0717 20:22:31.326517 1115888 common.go:143] Fixing permissions on /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/custom-flannel-395471 ...
	I0717 20:22:31.326669 1115853 main.go:141] libmachine: (custom-flannel-395471) Setting executable bit set on /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/custom-flannel-395471 (perms=drwx------)
	I0717 20:22:31.326691 1115853 main.go:141] libmachine: (custom-flannel-395471) Setting executable bit set on /home/jenkins/minikube-integration/16890-1061725/.minikube/machines (perms=drwxr-xr-x)
	I0717 20:22:31.326701 1115853 main.go:141] libmachine: (custom-flannel-395471) Setting executable bit set on /home/jenkins/minikube-integration/16890-1061725/.minikube (perms=drwxr-xr-x)
	I0717 20:22:31.326712 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines/custom-flannel-395471
	I0717 20:22:31.326729 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16890-1061725/.minikube/machines
	I0717 20:22:31.326745 1115853 main.go:141] libmachine: (custom-flannel-395471) Setting executable bit set on /home/jenkins/minikube-integration/16890-1061725 (perms=drwxrwxr-x)
	I0717 20:22:31.326756 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16890-1061725/.minikube
	I0717 20:22:31.326769 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16890-1061725
	I0717 20:22:31.326783 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 20:22:31.326797 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | Checking permissions on dir: /home/jenkins
	I0717 20:22:31.326809 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | Checking permissions on dir: /home
	I0717 20:22:31.326825 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | Skipping /home - not owner
	I0717 20:22:31.326856 1115853 main.go:141] libmachine: (custom-flannel-395471) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 20:22:31.326889 1115853 main.go:141] libmachine: (custom-flannel-395471) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 20:22:31.326899 1115853 main.go:141] libmachine: (custom-flannel-395471) Creating domain...
	I0717 20:22:31.328250 1115853 main.go:141] libmachine: (custom-flannel-395471) define libvirt domain using xml: 
	I0717 20:22:31.328283 1115853 main.go:141] libmachine: (custom-flannel-395471) <domain type='kvm'>
	I0717 20:22:31.328322 1115853 main.go:141] libmachine: (custom-flannel-395471)   <name>custom-flannel-395471</name>
	I0717 20:22:31.328350 1115853 main.go:141] libmachine: (custom-flannel-395471)   <memory unit='MiB'>3072</memory>
	I0717 20:22:31.328361 1115853 main.go:141] libmachine: (custom-flannel-395471)   <vcpu>2</vcpu>
	I0717 20:22:31.328371 1115853 main.go:141] libmachine: (custom-flannel-395471)   <features>
	I0717 20:22:31.328382 1115853 main.go:141] libmachine: (custom-flannel-395471)     <acpi/>
	I0717 20:22:31.328395 1115853 main.go:141] libmachine: (custom-flannel-395471)     <apic/>
	I0717 20:22:31.328405 1115853 main.go:141] libmachine: (custom-flannel-395471)     <pae/>
	I0717 20:22:31.328416 1115853 main.go:141] libmachine: (custom-flannel-395471)     
	I0717 20:22:31.328428 1115853 main.go:141] libmachine: (custom-flannel-395471)   </features>
	I0717 20:22:31.328439 1115853 main.go:141] libmachine: (custom-flannel-395471)   <cpu mode='host-passthrough'>
	I0717 20:22:31.328451 1115853 main.go:141] libmachine: (custom-flannel-395471)   
	I0717 20:22:31.328462 1115853 main.go:141] libmachine: (custom-flannel-395471)   </cpu>
	I0717 20:22:31.328473 1115853 main.go:141] libmachine: (custom-flannel-395471)   <os>
	I0717 20:22:31.328485 1115853 main.go:141] libmachine: (custom-flannel-395471)     <type>hvm</type>
	I0717 20:22:31.328496 1115853 main.go:141] libmachine: (custom-flannel-395471)     <boot dev='cdrom'/>
	I0717 20:22:31.328522 1115853 main.go:141] libmachine: (custom-flannel-395471)     <boot dev='hd'/>
	I0717 20:22:31.328559 1115853 main.go:141] libmachine: (custom-flannel-395471)     <bootmenu enable='no'/>
	I0717 20:22:31.328576 1115853 main.go:141] libmachine: (custom-flannel-395471)   </os>
	I0717 20:22:31.328591 1115853 main.go:141] libmachine: (custom-flannel-395471)   <devices>
	I0717 20:22:31.328604 1115853 main.go:141] libmachine: (custom-flannel-395471)     <disk type='file' device='cdrom'>
	I0717 20:22:31.328622 1115853 main.go:141] libmachine: (custom-flannel-395471)       <source file='/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/custom-flannel-395471/boot2docker.iso'/>
	I0717 20:22:31.328636 1115853 main.go:141] libmachine: (custom-flannel-395471)       <target dev='hdc' bus='scsi'/>
	I0717 20:22:31.328649 1115853 main.go:141] libmachine: (custom-flannel-395471)       <readonly/>
	I0717 20:22:31.328661 1115853 main.go:141] libmachine: (custom-flannel-395471)     </disk>
	I0717 20:22:31.328677 1115853 main.go:141] libmachine: (custom-flannel-395471)     <disk type='file' device='disk'>
	I0717 20:22:31.328692 1115853 main.go:141] libmachine: (custom-flannel-395471)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 20:22:31.328709 1115853 main.go:141] libmachine: (custom-flannel-395471)       <source file='/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/custom-flannel-395471/custom-flannel-395471.rawdisk'/>
	I0717 20:22:31.328723 1115853 main.go:141] libmachine: (custom-flannel-395471)       <target dev='hda' bus='virtio'/>
	I0717 20:22:31.328736 1115853 main.go:141] libmachine: (custom-flannel-395471)     </disk>
	I0717 20:22:31.328748 1115853 main.go:141] libmachine: (custom-flannel-395471)     <interface type='network'>
	I0717 20:22:31.328769 1115853 main.go:141] libmachine: (custom-flannel-395471)       <source network='mk-custom-flannel-395471'/>
	I0717 20:22:31.328781 1115853 main.go:141] libmachine: (custom-flannel-395471)       <model type='virtio'/>
	I0717 20:22:31.328795 1115853 main.go:141] libmachine: (custom-flannel-395471)     </interface>
	I0717 20:22:31.328810 1115853 main.go:141] libmachine: (custom-flannel-395471)     <interface type='network'>
	I0717 20:22:31.328829 1115853 main.go:141] libmachine: (custom-flannel-395471)       <source network='default'/>
	I0717 20:22:31.328842 1115853 main.go:141] libmachine: (custom-flannel-395471)       <model type='virtio'/>
	I0717 20:22:31.328855 1115853 main.go:141] libmachine: (custom-flannel-395471)     </interface>
	I0717 20:22:31.328866 1115853 main.go:141] libmachine: (custom-flannel-395471)     <serial type='pty'>
	I0717 20:22:31.328879 1115853 main.go:141] libmachine: (custom-flannel-395471)       <target port='0'/>
	I0717 20:22:31.328890 1115853 main.go:141] libmachine: (custom-flannel-395471)     </serial>
	I0717 20:22:31.328900 1115853 main.go:141] libmachine: (custom-flannel-395471)     <console type='pty'>
	I0717 20:22:31.328912 1115853 main.go:141] libmachine: (custom-flannel-395471)       <target type='serial' port='0'/>
	I0717 20:22:31.328925 1115853 main.go:141] libmachine: (custom-flannel-395471)     </console>
	I0717 20:22:31.328937 1115853 main.go:141] libmachine: (custom-flannel-395471)     <rng model='virtio'>
	I0717 20:22:31.328951 1115853 main.go:141] libmachine: (custom-flannel-395471)       <backend model='random'>/dev/random</backend>
	I0717 20:22:31.328963 1115853 main.go:141] libmachine: (custom-flannel-395471)     </rng>
	I0717 20:22:31.328976 1115853 main.go:141] libmachine: (custom-flannel-395471)     
	I0717 20:22:31.328987 1115853 main.go:141] libmachine: (custom-flannel-395471)     
	I0717 20:22:31.329000 1115853 main.go:141] libmachine: (custom-flannel-395471)   </devices>
	I0717 20:22:31.329012 1115853 main.go:141] libmachine: (custom-flannel-395471) </domain>
	I0717 20:22:31.329029 1115853 main.go:141] libmachine: (custom-flannel-395471) 
	I0717 20:22:31.334465 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | domain custom-flannel-395471 has defined MAC address 52:54:00:6c:1f:e1 in network default
	I0717 20:22:31.335189 1115853 main.go:141] libmachine: (custom-flannel-395471) Ensuring networks are active...
	I0717 20:22:31.335218 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | domain custom-flannel-395471 has defined MAC address 52:54:00:52:7d:b1 in network mk-custom-flannel-395471
	I0717 20:22:31.336180 1115853 main.go:141] libmachine: (custom-flannel-395471) Ensuring network default is active
	I0717 20:22:31.336569 1115853 main.go:141] libmachine: (custom-flannel-395471) Ensuring network mk-custom-flannel-395471 is active
	I0717 20:22:31.337157 1115853 main.go:141] libmachine: (custom-flannel-395471) Getting domain xml...
	I0717 20:22:31.337947 1115853 main.go:141] libmachine: (custom-flannel-395471) Creating domain...
	I0717 20:22:32.867061 1115853 main.go:141] libmachine: (custom-flannel-395471) Waiting to get IP...
	I0717 20:22:32.868198 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | domain custom-flannel-395471 has defined MAC address 52:54:00:52:7d:b1 in network mk-custom-flannel-395471
	I0717 20:22:32.868804 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | unable to find current IP address of domain custom-flannel-395471 in network mk-custom-flannel-395471
	I0717 20:22:32.868838 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | I0717 20:22:32.868798 1115888 retry.go:31] will retry after 248.003776ms: waiting for machine to come up
	I0717 20:22:33.118684 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | domain custom-flannel-395471 has defined MAC address 52:54:00:52:7d:b1 in network mk-custom-flannel-395471
	I0717 20:22:33.119228 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | unable to find current IP address of domain custom-flannel-395471 in network mk-custom-flannel-395471
	I0717 20:22:33.119260 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | I0717 20:22:33.119184 1115888 retry.go:31] will retry after 383.555758ms: waiting for machine to come up
	I0717 20:22:33.505027 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | domain custom-flannel-395471 has defined MAC address 52:54:00:52:7d:b1 in network mk-custom-flannel-395471
	I0717 20:22:33.505899 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | unable to find current IP address of domain custom-flannel-395471 in network mk-custom-flannel-395471
	I0717 20:22:33.505928 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | I0717 20:22:33.505847 1115888 retry.go:31] will retry after 383.445683ms: waiting for machine to come up
	I0717 20:22:33.891923 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | domain custom-flannel-395471 has defined MAC address 52:54:00:52:7d:b1 in network mk-custom-flannel-395471
	I0717 20:22:33.893803 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | unable to find current IP address of domain custom-flannel-395471 in network mk-custom-flannel-395471
	I0717 20:22:33.893836 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | I0717 20:22:33.893741 1115888 retry.go:31] will retry after 440.422305ms: waiting for machine to come up
	I0717 20:22:34.335605 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | domain custom-flannel-395471 has defined MAC address 52:54:00:52:7d:b1 in network mk-custom-flannel-395471
	I0717 20:22:34.336151 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | unable to find current IP address of domain custom-flannel-395471 in network mk-custom-flannel-395471
	I0717 20:22:34.336183 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | I0717 20:22:34.336084 1115888 retry.go:31] will retry after 593.593569ms: waiting for machine to come up
	I0717 20:22:34.932124 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | domain custom-flannel-395471 has defined MAC address 52:54:00:52:7d:b1 in network mk-custom-flannel-395471
	I0717 20:22:34.932735 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | unable to find current IP address of domain custom-flannel-395471 in network mk-custom-flannel-395471
	I0717 20:22:34.932772 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | I0717 20:22:34.932690 1115888 retry.go:31] will retry after 738.671327ms: waiting for machine to come up
	I0717 20:22:35.673022 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | domain custom-flannel-395471 has defined MAC address 52:54:00:52:7d:b1 in network mk-custom-flannel-395471
	I0717 20:22:35.673676 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | unable to find current IP address of domain custom-flannel-395471 in network mk-custom-flannel-395471
	I0717 20:22:35.673703 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | I0717 20:22:35.673630 1115888 retry.go:31] will retry after 988.467805ms: waiting for machine to come up
	I0717 20:22:32.235802 1114238 main.go:141] libmachine: (kindnet-395471) Calling .GetIP
	I0717 20:22:32.239429 1114238 main.go:141] libmachine: (kindnet-395471) DBG | domain kindnet-395471 has defined MAC address 52:54:00:13:a1:93 in network mk-kindnet-395471
	I0717 20:22:32.239851 1114238 main.go:141] libmachine: (kindnet-395471) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:a1:93", ip: ""} in network mk-kindnet-395471: {Iface:virbr2 ExpiryTime:2023-07-17 21:22:19 +0000 UTC Type:0 Mac:52:54:00:13:a1:93 Iaid: IPaddr:192.168.72.185 Prefix:24 Hostname:kindnet-395471 Clientid:01:52:54:00:13:a1:93}
	I0717 20:22:32.239885 1114238 main.go:141] libmachine: (kindnet-395471) DBG | domain kindnet-395471 has defined IP address 192.168.72.185 and MAC address 52:54:00:13:a1:93 in network mk-kindnet-395471
	I0717 20:22:32.240212 1114238 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 20:22:32.246252 1114238 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 20:22:32.260373 1114238 localpath.go:92] copying /home/jenkins/minikube-integration/16890-1061725/.minikube/client.crt -> /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/kindnet-395471/client.crt
	I0717 20:22:32.260561 1114238 localpath.go:117] copying /home/jenkins/minikube-integration/16890-1061725/.minikube/client.key -> /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/kindnet-395471/client.key
	I0717 20:22:32.260689 1114238 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 20:22:32.260767 1114238 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 20:22:32.304459 1114238 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0717 20:22:32.304555 1114238 ssh_runner.go:195] Run: which lz4
	I0717 20:22:32.309094 1114238 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 20:22:32.313704 1114238 ssh_runner.go:362] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 20:22:32.313745 1114238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0717 20:22:34.326082 1114238 crio.go:444] Took 2.017039 seconds to copy over tarball
	I0717 20:22:34.326163 1114238 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 20:22:37.996966 1114238 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.670766552s)
	I0717 20:22:37.997003 1114238 crio.go:451] Took 3.670889 seconds to extract the tarball
	I0717 20:22:37.997017 1114238 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 20:22:38.045918 1114238 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 20:22:38.114365 1114238 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 20:22:38.114389 1114238 cache_images.go:84] Images are preloaded, skipping loading
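	For reference, the preload flow logged above (check crictl for the expected images, copy the tarball, extract it with lz4, then re-check) can be replayed by hand. The sketch below is illustrative only, not part of the test run; it assumes shell access on the node and reuses the paths from this log.
	# Illustrative only: replay the preload extraction from this run (run on the node).
	sudo crictl images --output json                  # check which images are already present
	sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4    # extract the preloaded image tarball
	sudo rm /preloaded.tar.lz4                        # clean up, as minikube does after extraction
	sudo crictl images --output json                  # the preloaded images should now be listed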
	I0717 20:22:38.114457 1114238 ssh_runner.go:195] Run: crio config
	I0717 20:22:38.181932 1114238 cni.go:84] Creating CNI manager for "kindnet"
	I0717 20:22:38.181966 1114238 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 20:22:38.181985 1114238 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.185 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-395471 NodeName:kindnet-395471 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.185"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.185 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 20:22:38.182112 1114238 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.185
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-395471"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.185
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.185"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
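	The full kubeadm configuration generated for this node is dumped above. Such a config can also be exercised without mutating the node via kubeadm's dry-run mode; the command below is illustrative only (it is not part of the test run) and assumes the same binary and config path used later in this log.
	# Illustrative only: dry-run the generated config without applying any changes.
	sudo /var/lib/minikube/binaries/v1.27.3/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml --dry-run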
	I0717 20:22:38.182193 1114238 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kindnet-395471 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.185
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:kindnet-395471 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:}
	I0717 20:22:38.182252 1114238 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 20:22:38.192983 1114238 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 20:22:38.193071 1114238 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 20:22:38.206374 1114238 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0717 20:22:38.226030 1114238 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 20:22:38.248334 1114238 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
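	With the 10-kubeadm.conf drop-in, the kubelet.service unit, and the kubeadm config written above, systemd would normally be reloaded so the drop-in takes effect. A minimal illustrative sketch follows; the exact sequencing on the node is handled by minikube itself.
	# Illustrative only: reload systemd so the 10-kubeadm.conf drop-in is picked up.
	sudo systemctl daemon-reload
	systemctl cat kubelet        # confirm the ExecStart override from 10-kubeadm.conf is applied
	sudo systemctl restart kubelet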
	I0717 20:22:38.269270 1114238 ssh_runner.go:195] Run: grep 192.168.72.185	control-plane.minikube.internal$ /etc/hosts
	I0717 20:22:38.274299 1114238 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.185	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 20:22:38.288837 1114238 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/kindnet-395471 for IP: 192.168.72.185
	I0717 20:22:38.288876 1114238 certs.go:190] acquiring lock for shared ca certs: {Name:mkdfbc203d7bd874aff2f1c4e5c3352251804e32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:22:38.289053 1114238 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key
	I0717 20:22:38.289163 1114238 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key
	I0717 20:22:38.289287 1114238 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/kindnet-395471/client.key
	I0717 20:22:38.289314 1114238 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/kindnet-395471/apiserver.key.bd3ca874
	I0717 20:22:38.289335 1114238 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/kindnet-395471/apiserver.crt.bd3ca874 with IP's: [192.168.72.185 10.96.0.1 127.0.0.1 10.0.0.1]
	I0717 20:22:38.711182 1114238 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/kindnet-395471/apiserver.crt.bd3ca874 ...
	I0717 20:22:38.711220 1114238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/kindnet-395471/apiserver.crt.bd3ca874: {Name:mkd270b4048b99c7e95dca5c1601ed654b02c66f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:22:38.711423 1114238 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/kindnet-395471/apiserver.key.bd3ca874 ...
	I0717 20:22:38.711442 1114238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/kindnet-395471/apiserver.key.bd3ca874: {Name:mk9147f5ad36ebd56b470dff9aa2d95e50771b32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:22:38.711540 1114238 certs.go:337] copying /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/kindnet-395471/apiserver.crt.bd3ca874 -> /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/kindnet-395471/apiserver.crt
	I0717 20:22:38.711629 1114238 certs.go:341] copying /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/kindnet-395471/apiserver.key.bd3ca874 -> /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/kindnet-395471/apiserver.key
	I0717 20:22:38.711705 1114238 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/kindnet-395471/proxy-client.key
	I0717 20:22:38.711727 1114238 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/kindnet-395471/proxy-client.crt with IP's: []
	I0717 20:22:38.783724 1114238 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/kindnet-395471/proxy-client.crt ...
	I0717 20:22:38.783765 1114238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/kindnet-395471/proxy-client.crt: {Name:mk9f22cd2a4a5b450a814bf873af255ca39b1ada Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:22:38.784054 1114238 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/kindnet-395471/proxy-client.key ...
	I0717 20:22:38.784075 1114238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/kindnet-395471/proxy-client.key: {Name:mk954a8fb4006833bafac48654f42e32efd98c7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:22:38.784303 1114238 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem (1338 bytes)
	W0717 20:22:38.784364 1114238 certs.go:433] ignoring /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954_empty.pem, impossibly tiny 0 bytes
	I0717 20:22:38.784387 1114238 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 20:22:38.784424 1114238 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/ca.pem (1082 bytes)
	I0717 20:22:38.784467 1114238 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/cert.pem (1123 bytes)
	I0717 20:22:38.784518 1114238 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/certs/key.pem (1675 bytes)
	I0717 20:22:38.784587 1114238 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem (1708 bytes)
	I0717 20:22:38.785194 1114238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/kindnet-395471/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 20:22:38.818442 1114238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/kindnet-395471/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 20:22:38.850635 1114238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/kindnet-395471/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 20:22:38.882819 1114238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/kindnet-395471/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 20:22:38.914798 1114238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 20:22:38.946870 1114238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 20:22:38.977089 1114238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 20:22:39.008307 1114238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 20:22:39.039112 1114238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/ssl/certs/10689542.pem --> /usr/share/ca-certificates/10689542.pem (1708 bytes)
	I0717 20:22:39.069504 1114238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 20:22:39.099086 1114238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1061725/.minikube/certs/1068954.pem --> /usr/share/ca-certificates/1068954.pem (1338 bytes)
	I0717 20:22:39.126694 1114238 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 20:22:39.147257 1114238 ssh_runner.go:195] Run: openssl version
	I0717 20:22:39.155660 1114238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10689542.pem && ln -fs /usr/share/ca-certificates/10689542.pem /etc/ssl/certs/10689542.pem"
	I0717 20:22:39.170419 1114238 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10689542.pem
	I0717 20:22:39.177708 1114238 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:52 /usr/share/ca-certificates/10689542.pem
	I0717 20:22:39.177785 1114238 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10689542.pem
	I0717 20:22:39.184900 1114238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10689542.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 20:22:39.198639 1114238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 20:22:39.211617 1114238 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 20:22:39.218009 1114238 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0717 20:22:39.218098 1114238 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 20:22:39.225830 1114238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 20:22:39.241664 1114238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1068954.pem && ln -fs /usr/share/ca-certificates/1068954.pem /etc/ssl/certs/1068954.pem"
	I0717 20:22:39.253862 1114238 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1068954.pem
	I0717 20:22:39.260736 1114238 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:52 /usr/share/ca-certificates/1068954.pem
	I0717 20:22:39.260842 1114238 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1068954.pem
	I0717 20:22:39.268120 1114238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1068954.pem /etc/ssl/certs/51391683.0"
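	The hash-named symlinks created above (for example /etc/ssl/certs/b5213941.0) follow OpenSSL's subject-hash lookup convention: the link name is the output of `openssl x509 -hash` plus a ".0" suffix. A minimal illustrative sketch of deriving one such link by hand, using the minikubeCA path from this run:
	# Illustrative only: derive the subject-hash symlink name OpenSSL uses for CA lookup.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"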
	I0717 20:22:39.280277 1114238 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 20:22:39.285828 1114238 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 20:22:39.285894 1114238 kubeadm.go:404] StartCluster: {Name:kindnet-395471 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kindnet-395471 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.185 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 20:22:39.286021 1114238 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 20:22:39.286084 1114238 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 20:22:39.324558 1114238 cri.go:89] found id: ""
	I0717 20:22:39.324648 1114238 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 20:22:39.335570 1114238 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 20:22:39.346777 1114238 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 20:22:39.358665 1114238 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 20:22:39.358724 1114238 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 20:22:39.425420 1114238 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0717 20:22:39.425514 1114238 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 20:22:39.573031 1114238 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 20:22:39.573217 1114238 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 20:22:39.573364 1114238 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 20:22:39.770837 1114238 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 20:22:36.664050 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | domain custom-flannel-395471 has defined MAC address 52:54:00:52:7d:b1 in network mk-custom-flannel-395471
	I0717 20:22:36.664630 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | unable to find current IP address of domain custom-flannel-395471 in network mk-custom-flannel-395471
	I0717 20:22:36.664662 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | I0717 20:22:36.664529 1115888 retry.go:31] will retry after 1.115756597s: waiting for machine to come up
	I0717 20:22:37.781872 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | domain custom-flannel-395471 has defined MAC address 52:54:00:52:7d:b1 in network mk-custom-flannel-395471
	I0717 20:22:37.782475 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | unable to find current IP address of domain custom-flannel-395471 in network mk-custom-flannel-395471
	I0717 20:22:37.782508 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | I0717 20:22:37.782412 1115888 retry.go:31] will retry after 1.635164934s: waiting for machine to come up
	I0717 20:22:39.419745 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | domain custom-flannel-395471 has defined MAC address 52:54:00:52:7d:b1 in network mk-custom-flannel-395471
	I0717 20:22:39.420464 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | unable to find current IP address of domain custom-flannel-395471 in network mk-custom-flannel-395471
	I0717 20:22:39.420499 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | I0717 20:22:39.420392 1115888 retry.go:31] will retry after 2.316643891s: waiting for machine to come up
	I0717 20:22:39.773627 1114238 out.go:204]   - Generating certificates and keys ...
	I0717 20:22:39.773777 1114238 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 20:22:39.773893 1114238 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 20:22:40.021067 1114238 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 20:22:40.243036 1114238 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0717 20:22:40.440328 1114238 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0717 20:22:40.520421 1114238 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0717 20:22:40.689362 1114238 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0717 20:22:40.689697 1114238 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kindnet-395471 localhost] and IPs [192.168.72.185 127.0.0.1 ::1]
	I0717 20:22:40.842772 1114238 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0717 20:22:40.842990 1114238 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kindnet-395471 localhost] and IPs [192.168.72.185 127.0.0.1 ::1]
	I0717 20:22:41.101423 1114238 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 20:22:41.211163 1114238 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 20:22:41.654867 1114238 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0717 20:22:41.655612 1114238 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 20:22:42.036377 1114238 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 20:22:42.220684 1114238 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 20:22:42.350793 1114238 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 20:22:42.599031 1114238 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 20:22:42.625460 1114238 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 20:22:42.627383 1114238 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 20:22:42.628258 1114238 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 20:22:42.793492 1114238 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 20:22:41.738844 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | domain custom-flannel-395471 has defined MAC address 52:54:00:52:7d:b1 in network mk-custom-flannel-395471
	I0717 20:22:41.739405 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | unable to find current IP address of domain custom-flannel-395471 in network mk-custom-flannel-395471
	I0717 20:22:41.739448 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | I0717 20:22:41.739358 1115888 retry.go:31] will retry after 2.132392702s: waiting for machine to come up
	I0717 20:22:43.874041 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | domain custom-flannel-395471 has defined MAC address 52:54:00:52:7d:b1 in network mk-custom-flannel-395471
	I0717 20:22:43.874584 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | unable to find current IP address of domain custom-flannel-395471 in network mk-custom-flannel-395471
	I0717 20:22:43.874604 1115853 main.go:141] libmachine: (custom-flannel-395471) DBG | I0717 20:22:43.874543 1115888 retry.go:31] will retry after 2.465051602s: waiting for machine to come up
	I0717 20:22:42.795992 1114238 out.go:204]   - Booting up control plane ...
	I0717 20:22:42.796121 1114238 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 20:22:42.797094 1114238 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 20:22:42.798013 1114238 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 20:22:42.799027 1114238 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 20:22:42.803152 1114238 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-07-17 19:58:50 UTC, ends at Mon 2023-07-17 20:22:49 UTC. --
	Jul 17 20:22:49 embed-certs-114855 crio[715]: time="2023-07-17 20:22:49.180013224Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=61ab2c7a-1b40-47b6-8dc1-a8df7f047821 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:22:49 embed-certs-114855 crio[715]: time="2023-07-17 20:22:49.180289860Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea,PodSandboxId:1f0c0b79b31dfe68fbef1f2e595035de7d6f678ba3f6166057bde811a03d123b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689624284905264525,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 994ec0db-08aa-4dd5-a137-1f6984051e65,},Annotations:map[string]string{io.kubernetes.container.hash: bd992592,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39,PodSandboxId:c274ffc0c7fe9f52c77da4556ab4878886fd82e4b6e445a226c7995dcdf174a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689624284651614217,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bfvnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f7fb55d-fa9f-4d08-b4ab-3814af550c01,},Annotations:map[string]string{io.kubernetes.container.hash: 90992ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e,PodSandboxId:9ee7541fb51d98c7de4c5e3005a44c354168d4ea64d4b82a1bfe852a9204f7b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689624283530546443,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-gq2b2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 833e67fa-16e2-4a5c-8c39-16cc4fbd411e,},Annotations:map[string]string{io.kubernetes.container.hash: 59fee763,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52,PodSandboxId:9ecfd0a904e7ebaab5c4dfb2cde59de457bca69f6a8f61b737dc42bde218ec51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689624258869292359,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 849a8d0dccd58b0d4de1642f30453709,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f,PodSandboxId:76dee47a73691a4fd9d500e9e7e7fd151efc0aa9b6aeb0dd463f77986074ee12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689624258812734812,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7dce0d
d54044c5bead23f2309aa88d,},Annotations:map[string]string{io.kubernetes.container.hash: 933764ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd,PodSandboxId:4b187605bba1119a66742eb26f6c6c069e2ef67327cae9799de79fe23d7b7efe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689624258522563148,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 57c1c5fe39a9ad0e8adcb474b4dff169,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8,PodSandboxId:bc879944812a23892188d86cddd6c4fd833d4ae559d9adb2a227c64a0e660c02,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689624258349028545,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2e3fe9483a42bbcf2012a6138b250f
,},Annotations:map[string]string{io.kubernetes.container.hash: c61456bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=61ab2c7a-1b40-47b6-8dc1-a8df7f047821 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:22:49 embed-certs-114855 crio[715]: time="2023-07-17 20:22:49.230227440Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3338c47a-e607-44d2-bb41-0b9156d00bcc name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:22:49 embed-certs-114855 crio[715]: time="2023-07-17 20:22:49.230624535Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3338c47a-e607-44d2-bb41-0b9156d00bcc name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:22:49 embed-certs-114855 crio[715]: time="2023-07-17 20:22:49.230847270Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea,PodSandboxId:1f0c0b79b31dfe68fbef1f2e595035de7d6f678ba3f6166057bde811a03d123b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689624284905264525,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 994ec0db-08aa-4dd5-a137-1f6984051e65,},Annotations:map[string]string{io.kubernetes.container.hash: bd992592,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39,PodSandboxId:c274ffc0c7fe9f52c77da4556ab4878886fd82e4b6e445a226c7995dcdf174a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689624284651614217,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bfvnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f7fb55d-fa9f-4d08-b4ab-3814af550c01,},Annotations:map[string]string{io.kubernetes.container.hash: 90992ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e,PodSandboxId:9ee7541fb51d98c7de4c5e3005a44c354168d4ea64d4b82a1bfe852a9204f7b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689624283530546443,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-gq2b2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 833e67fa-16e2-4a5c-8c39-16cc4fbd411e,},Annotations:map[string]string{io.kubernetes.container.hash: 59fee763,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52,PodSandboxId:9ecfd0a904e7ebaab5c4dfb2cde59de457bca69f6a8f61b737dc42bde218ec51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689624258869292359,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 849a8d0dccd58b0d4de1642f30453709,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f,PodSandboxId:76dee47a73691a4fd9d500e9e7e7fd151efc0aa9b6aeb0dd463f77986074ee12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689624258812734812,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7dce0d
d54044c5bead23f2309aa88d,},Annotations:map[string]string{io.kubernetes.container.hash: 933764ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd,PodSandboxId:4b187605bba1119a66742eb26f6c6c069e2ef67327cae9799de79fe23d7b7efe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689624258522563148,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 57c1c5fe39a9ad0e8adcb474b4dff169,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8,PodSandboxId:bc879944812a23892188d86cddd6c4fd833d4ae559d9adb2a227c64a0e660c02,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689624258349028545,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2e3fe9483a42bbcf2012a6138b250f
,},Annotations:map[string]string{io.kubernetes.container.hash: c61456bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3338c47a-e607-44d2-bb41-0b9156d00bcc name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:22:49 embed-certs-114855 crio[715]: time="2023-07-17 20:22:49.285661494Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b2ae303d-243b-4fcc-a553-9e225418199e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:22:49 embed-certs-114855 crio[715]: time="2023-07-17 20:22:49.285732734Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b2ae303d-243b-4fcc-a553-9e225418199e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:22:49 embed-certs-114855 crio[715]: time="2023-07-17 20:22:49.285957611Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea,PodSandboxId:1f0c0b79b31dfe68fbef1f2e595035de7d6f678ba3f6166057bde811a03d123b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689624284905264525,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 994ec0db-08aa-4dd5-a137-1f6984051e65,},Annotations:map[string]string{io.kubernetes.container.hash: bd992592,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39,PodSandboxId:c274ffc0c7fe9f52c77da4556ab4878886fd82e4b6e445a226c7995dcdf174a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689624284651614217,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bfvnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f7fb55d-fa9f-4d08-b4ab-3814af550c01,},Annotations:map[string]string{io.kubernetes.container.hash: 90992ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e,PodSandboxId:9ee7541fb51d98c7de4c5e3005a44c354168d4ea64d4b82a1bfe852a9204f7b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689624283530546443,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-gq2b2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 833e67fa-16e2-4a5c-8c39-16cc4fbd411e,},Annotations:map[string]string{io.kubernetes.container.hash: 59fee763,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52,PodSandboxId:9ecfd0a904e7ebaab5c4dfb2cde59de457bca69f6a8f61b737dc42bde218ec51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689624258869292359,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 849a8d0dccd58b0d4de1642f30453709,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f,PodSandboxId:76dee47a73691a4fd9d500e9e7e7fd151efc0aa9b6aeb0dd463f77986074ee12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689624258812734812,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7dce0d
d54044c5bead23f2309aa88d,},Annotations:map[string]string{io.kubernetes.container.hash: 933764ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd,PodSandboxId:4b187605bba1119a66742eb26f6c6c069e2ef67327cae9799de79fe23d7b7efe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689624258522563148,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 57c1c5fe39a9ad0e8adcb474b4dff169,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8,PodSandboxId:bc879944812a23892188d86cddd6c4fd833d4ae559d9adb2a227c64a0e660c02,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689624258349028545,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2e3fe9483a42bbcf2012a6138b250f
,},Annotations:map[string]string{io.kubernetes.container.hash: c61456bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b2ae303d-243b-4fcc-a553-9e225418199e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:22:49 embed-certs-114855 crio[715]: time="2023-07-17 20:22:49.325274447Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5c3210f3-4d65-4408-8887-1802c92a8785 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:22:49 embed-certs-114855 crio[715]: time="2023-07-17 20:22:49.325343117Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5c3210f3-4d65-4408-8887-1802c92a8785 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:22:49 embed-certs-114855 crio[715]: time="2023-07-17 20:22:49.325546072Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea,PodSandboxId:1f0c0b79b31dfe68fbef1f2e595035de7d6f678ba3f6166057bde811a03d123b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689624284905264525,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 994ec0db-08aa-4dd5-a137-1f6984051e65,},Annotations:map[string]string{io.kubernetes.container.hash: bd992592,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39,PodSandboxId:c274ffc0c7fe9f52c77da4556ab4878886fd82e4b6e445a226c7995dcdf174a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689624284651614217,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bfvnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f7fb55d-fa9f-4d08-b4ab-3814af550c01,},Annotations:map[string]string{io.kubernetes.container.hash: 90992ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e,PodSandboxId:9ee7541fb51d98c7de4c5e3005a44c354168d4ea64d4b82a1bfe852a9204f7b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689624283530546443,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-gq2b2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 833e67fa-16e2-4a5c-8c39-16cc4fbd411e,},Annotations:map[string]string{io.kubernetes.container.hash: 59fee763,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52,PodSandboxId:9ecfd0a904e7ebaab5c4dfb2cde59de457bca69f6a8f61b737dc42bde218ec51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689624258869292359,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 849a8d0dccd58b0d4de1642f30453709,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f,PodSandboxId:76dee47a73691a4fd9d500e9e7e7fd151efc0aa9b6aeb0dd463f77986074ee12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689624258812734812,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7dce0d
d54044c5bead23f2309aa88d,},Annotations:map[string]string{io.kubernetes.container.hash: 933764ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd,PodSandboxId:4b187605bba1119a66742eb26f6c6c069e2ef67327cae9799de79fe23d7b7efe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689624258522563148,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 57c1c5fe39a9ad0e8adcb474b4dff169,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8,PodSandboxId:bc879944812a23892188d86cddd6c4fd833d4ae559d9adb2a227c64a0e660c02,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689624258349028545,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2e3fe9483a42bbcf2012a6138b250f
,},Annotations:map[string]string{io.kubernetes.container.hash: c61456bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5c3210f3-4d65-4408-8887-1802c92a8785 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:22:49 embed-certs-114855 crio[715]: time="2023-07-17 20:22:49.368956651Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0cd14f63-f726-44b5-a444-a8d0fbf3d303 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:22:49 embed-certs-114855 crio[715]: time="2023-07-17 20:22:49.369048484Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0cd14f63-f726-44b5-a444-a8d0fbf3d303 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:22:49 embed-certs-114855 crio[715]: time="2023-07-17 20:22:49.369406704Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea,PodSandboxId:1f0c0b79b31dfe68fbef1f2e595035de7d6f678ba3f6166057bde811a03d123b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689624284905264525,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 994ec0db-08aa-4dd5-a137-1f6984051e65,},Annotations:map[string]string{io.kubernetes.container.hash: bd992592,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39,PodSandboxId:c274ffc0c7fe9f52c77da4556ab4878886fd82e4b6e445a226c7995dcdf174a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689624284651614217,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bfvnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f7fb55d-fa9f-4d08-b4ab-3814af550c01,},Annotations:map[string]string{io.kubernetes.container.hash: 90992ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e,PodSandboxId:9ee7541fb51d98c7de4c5e3005a44c354168d4ea64d4b82a1bfe852a9204f7b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689624283530546443,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-gq2b2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 833e67fa-16e2-4a5c-8c39-16cc4fbd411e,},Annotations:map[string]string{io.kubernetes.container.hash: 59fee763,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52,PodSandboxId:9ecfd0a904e7ebaab5c4dfb2cde59de457bca69f6a8f61b737dc42bde218ec51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689624258869292359,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 849a8d0dccd58b0d4de1642f30453709,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f,PodSandboxId:76dee47a73691a4fd9d500e9e7e7fd151efc0aa9b6aeb0dd463f77986074ee12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689624258812734812,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7dce0d
d54044c5bead23f2309aa88d,},Annotations:map[string]string{io.kubernetes.container.hash: 933764ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd,PodSandboxId:4b187605bba1119a66742eb26f6c6c069e2ef67327cae9799de79fe23d7b7efe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689624258522563148,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 57c1c5fe39a9ad0e8adcb474b4dff169,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8,PodSandboxId:bc879944812a23892188d86cddd6c4fd833d4ae559d9adb2a227c64a0e660c02,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689624258349028545,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2e3fe9483a42bbcf2012a6138b250f
,},Annotations:map[string]string{io.kubernetes.container.hash: c61456bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0cd14f63-f726-44b5-a444-a8d0fbf3d303 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:22:49 embed-certs-114855 crio[715]: time="2023-07-17 20:22:49.413808149Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=aa7be42d-d7b3-45fb-a292-ce9ebbc59ea8 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 20:22:49 embed-certs-114855 crio[715]: time="2023-07-17 20:22:49.414613770Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:1f0c0b79b31dfe68fbef1f2e595035de7d6f678ba3f6166057bde811a03d123b,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:994ec0db-08aa-4dd5-a137-1f6984051e65,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689624283948793677,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 994ec0db-08aa-4dd5-a137-1f6984051e65,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube
-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-07-17T20:04:43.301910477Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a8b693e844590cf8e81069ce717d47fad82fa1f98dbcf2db6a505aa96d011933,Metadata:&PodSandboxMetadata{Name:metrics-server-74d5c6b9c-jvfz8,Uid:f861e320-9125-4081-b043-c90d8b027f71,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689624283503057283,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-74d5c6b9c-jvfz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f861e320-9125-4081-b043-c90d8b027f71,
k8s-app: metrics-server,pod-template-hash: 74d5c6b9c,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T20:04:43.158502751Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9ee7541fb51d98c7de4c5e3005a44c354168d4ea64d4b82a1bfe852a9204f7b9,Metadata:&PodSandboxMetadata{Name:coredns-5d78c9869d-gq2b2,Uid:833e67fa-16e2-4a5c-8c39-16cc4fbd411e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689624281169070004,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5d78c9869d-gq2b2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 833e67fa-16e2-4a5c-8c39-16cc4fbd411e,k8s-app: kube-dns,pod-template-hash: 5d78c9869d,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T20:04:40.801980588Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c274ffc0c7fe9f52c77da4556ab4878886fd82e4b6e445a226c7995dcdf174a9,Metadata:&PodSandboxMetadata{Name:kube-proxy-bfvnl,Uid:6f7fb55d-fa9f-4d08-b4ab-3814a
f550c01,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689624281098764435,Labels:map[string]string{controller-revision-hash: 56999f657b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-bfvnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f7fb55d-fa9f-4d08-b4ab-3814af550c01,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T20:04:40.757679348Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:76dee47a73691a4fd9d500e9e7e7fd151efc0aa9b6aeb0dd463f77986074ee12,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-114855,Uid:6e7dce0dd54044c5bead23f2309aa88d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689624257820120605,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7dce0dd54044c5bead23f2309a
a88d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.213:8443,kubernetes.io/config.hash: 6e7dce0dd54044c5bead23f2309aa88d,kubernetes.io/config.seen: 2023-07-17T20:04:17.267807063Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4b187605bba1119a66742eb26f6c6c069e2ef67327cae9799de79fe23d7b7efe,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-114855,Uid:57c1c5fe39a9ad0e8adcb474b4dff169,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689624257812237247,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57c1c5fe39a9ad0e8adcb474b4dff169,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 57c1c5fe39a9ad0e8adcb474b4dff169,kubernetes.io/config.seen: 2023-07-17T20:04:17.267800332Z,kube
rnetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bc879944812a23892188d86cddd6c4fd833d4ae559d9adb2a227c64a0e660c02,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-114855,Uid:3c2e3fe9483a42bbcf2012a6138b250f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689624257792495948,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2e3fe9483a42bbcf2012a6138b250f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.213:2379,kubernetes.io/config.hash: 3c2e3fe9483a42bbcf2012a6138b250f,kubernetes.io/config.seen: 2023-07-17T20:04:17.267805900Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9ecfd0a904e7ebaab5c4dfb2cde59de457bca69f6a8f61b737dc42bde218ec51,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-114855,Uid:849a8d0dccd58b0d4de1642f30453709,Namespac
e:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689624257766141422,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 849a8d0dccd58b0d4de1642f30453709,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 849a8d0dccd58b0d4de1642f30453709,kubernetes.io/config.seen: 2023-07-17T20:04:17.267804893Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=aa7be42d-d7b3-45fb-a292-ce9ebbc59ea8 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 20:22:49 embed-certs-114855 crio[715]: time="2023-07-17 20:22:49.415920215Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7c163de0-7e7c-46af-acda-91ba9c2734a0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 20:22:49 embed-certs-114855 crio[715]: time="2023-07-17 20:22:49.415994527Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7c163de0-7e7c-46af-acda-91ba9c2734a0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 20:22:49 embed-certs-114855 crio[715]: time="2023-07-17 20:22:49.416313022Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea,PodSandboxId:1f0c0b79b31dfe68fbef1f2e595035de7d6f678ba3f6166057bde811a03d123b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689624284905264525,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 994ec0db-08aa-4dd5-a137-1f6984051e65,},Annotations:map[string]string{io.kubernetes.container.hash: bd992592,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39,PodSandboxId:c274ffc0c7fe9f52c77da4556ab4878886fd82e4b6e445a226c7995dcdf174a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689624284651614217,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bfvnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f7fb55d-fa9f-4d08-b4ab-3814af550c01,},Annotations:map[string]string{io.kubernetes.container.hash: 90992ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e,PodSandboxId:9ee7541fb51d98c7de4c5e3005a44c354168d4ea64d4b82a1bfe852a9204f7b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689624283530546443,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-gq2b2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 833e67fa-16e2-4a5c-8c39-16cc4fbd411e,},Annotations:map[string]string{io.kubernetes.container.hash: 59fee763,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52,PodSandboxId:9ecfd0a904e7ebaab5c4dfb2cde59de457bca69f6a8f61b737dc42bde218ec51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689624258869292359,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 849a8d0dccd58b0d4de1642f30453709,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f,PodSandboxId:76dee47a73691a4fd9d500e9e7e7fd151efc0aa9b6aeb0dd463f77986074ee12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689624258812734812,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7dce0d
d54044c5bead23f2309aa88d,},Annotations:map[string]string{io.kubernetes.container.hash: 933764ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd,PodSandboxId:4b187605bba1119a66742eb26f6c6c069e2ef67327cae9799de79fe23d7b7efe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689624258522563148,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 57c1c5fe39a9ad0e8adcb474b4dff169,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8,PodSandboxId:bc879944812a23892188d86cddd6c4fd833d4ae559d9adb2a227c64a0e660c02,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689624258349028545,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2e3fe9483a42bbcf2012a6138b250f
,},Annotations:map[string]string{io.kubernetes.container.hash: c61456bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7c163de0-7e7c-46af-acda-91ba9c2734a0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 20:22:49 embed-certs-114855 crio[715]: time="2023-07-17 20:22:49.435776328Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=204dcb15-aeb5-4e1f-8062-60b987c14c90 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:22:49 embed-certs-114855 crio[715]: time="2023-07-17 20:22:49.435938008Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=204dcb15-aeb5-4e1f-8062-60b987c14c90 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:22:49 embed-certs-114855 crio[715]: time="2023-07-17 20:22:49.436233023Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea,PodSandboxId:1f0c0b79b31dfe68fbef1f2e595035de7d6f678ba3f6166057bde811a03d123b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689624284905264525,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 994ec0db-08aa-4dd5-a137-1f6984051e65,},Annotations:map[string]string{io.kubernetes.container.hash: bd992592,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39,PodSandboxId:c274ffc0c7fe9f52c77da4556ab4878886fd82e4b6e445a226c7995dcdf174a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689624284651614217,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bfvnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f7fb55d-fa9f-4d08-b4ab-3814af550c01,},Annotations:map[string]string{io.kubernetes.container.hash: 90992ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e,PodSandboxId:9ee7541fb51d98c7de4c5e3005a44c354168d4ea64d4b82a1bfe852a9204f7b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689624283530546443,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-gq2b2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 833e67fa-16e2-4a5c-8c39-16cc4fbd411e,},Annotations:map[string]string{io.kubernetes.container.hash: 59fee763,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52,PodSandboxId:9ecfd0a904e7ebaab5c4dfb2cde59de457bca69f6a8f61b737dc42bde218ec51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689624258869292359,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 849a8d0dccd58b0d4de1642f30453709,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f,PodSandboxId:76dee47a73691a4fd9d500e9e7e7fd151efc0aa9b6aeb0dd463f77986074ee12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689624258812734812,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7dce0d
d54044c5bead23f2309aa88d,},Annotations:map[string]string{io.kubernetes.container.hash: 933764ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd,PodSandboxId:4b187605bba1119a66742eb26f6c6c069e2ef67327cae9799de79fe23d7b7efe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689624258522563148,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 57c1c5fe39a9ad0e8adcb474b4dff169,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8,PodSandboxId:bc879944812a23892188d86cddd6c4fd833d4ae559d9adb2a227c64a0e660c02,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689624258349028545,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2e3fe9483a42bbcf2012a6138b250f
,},Annotations:map[string]string{io.kubernetes.container.hash: c61456bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=204dcb15-aeb5-4e1f-8062-60b987c14c90 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:22:49 embed-certs-114855 crio[715]: time="2023-07-17 20:22:49.491864502Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=49bed31c-ef30-44d7-b282-9001a8735b64 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:22:49 embed-certs-114855 crio[715]: time="2023-07-17 20:22:49.491965777Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=49bed31c-ef30-44d7-b282-9001a8735b64 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 20:22:49 embed-certs-114855 crio[715]: time="2023-07-17 20:22:49.492333677Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea,PodSandboxId:1f0c0b79b31dfe68fbef1f2e595035de7d6f678ba3f6166057bde811a03d123b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689624284905264525,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 994ec0db-08aa-4dd5-a137-1f6984051e65,},Annotations:map[string]string{io.kubernetes.container.hash: bd992592,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39,PodSandboxId:c274ffc0c7fe9f52c77da4556ab4878886fd82e4b6e445a226c7995dcdf174a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689624284651614217,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bfvnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f7fb55d-fa9f-4d08-b4ab-3814af550c01,},Annotations:map[string]string{io.kubernetes.container.hash: 90992ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e,PodSandboxId:9ee7541fb51d98c7de4c5e3005a44c354168d4ea64d4b82a1bfe852a9204f7b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689624283530546443,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-gq2b2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 833e67fa-16e2-4a5c-8c39-16cc4fbd411e,},Annotations:map[string]string{io.kubernetes.container.hash: 59fee763,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52,PodSandboxId:9ecfd0a904e7ebaab5c4dfb2cde59de457bca69f6a8f61b737dc42bde218ec51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689624258869292359,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 849a8d0dccd58b0d4de1642f30453709,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f,PodSandboxId:76dee47a73691a4fd9d500e9e7e7fd151efc0aa9b6aeb0dd463f77986074ee12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689624258812734812,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7dce0d
d54044c5bead23f2309aa88d,},Annotations:map[string]string{io.kubernetes.container.hash: 933764ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd,PodSandboxId:4b187605bba1119a66742eb26f6c6c069e2ef67327cae9799de79fe23d7b7efe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689624258522563148,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 57c1c5fe39a9ad0e8adcb474b4dff169,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8,PodSandboxId:bc879944812a23892188d86cddd6c4fd833d4ae559d9adb2a227c64a0e660c02,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689624258349028545,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-114855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2e3fe9483a42bbcf2012a6138b250f
,},Annotations:map[string]string{io.kubernetes.container.hash: c61456bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=49bed31c-ef30-44d7-b282-9001a8735b64 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	1f09aa9710f96       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Running             storage-provisioner       0                   1f0c0b79b31df
	c3094a9649f15       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c   18 minutes ago      Running             kube-proxy                0                   c274ffc0c7fe9
	9edc839c4e8e9       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   18 minutes ago      Running             coredns                   0                   9ee7541fb51d9
	20ad6b7297313       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a   18 minutes ago      Running             kube-scheduler            2                   9ecfd0a904e7e
	b983a08dbeafc       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a   18 minutes ago      Running             kube-apiserver            2                   76dee47a73691
	7a8fd7290abfe       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f   18 minutes ago      Running             kube-controller-manager   2                   4b187605bba11
	6f2263eee0373       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681   18 minutes ago      Running             etcd                      2                   bc879944812a2
	
	* 
	* ==> coredns [9edc839c4e8e944009647c39b071d26cf80905a78b00a3790ecde7bf8d9a4b2e] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:35475 - 12124 "HINFO IN 5559246197945730497.3557320093662157327. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011324204s
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-114855
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-114855
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5
	                    minikube.k8s.io/name=embed-certs-114855
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T20_04_27_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 20:04:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-114855
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 20:22:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 20:20:05 +0000   Mon, 17 Jul 2023 20:04:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 20:20:05 +0000   Mon, 17 Jul 2023 20:04:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 20:20:05 +0000   Mon, 17 Jul 2023 20:04:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 20:20:05 +0000   Mon, 17 Jul 2023 20:04:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.213
	  Hostname:    embed-certs-114855
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 467d878487bd48a9aeb3f4254d204a95
	  System UUID:                467d8784-87bd-48a9-aeb3-f4254d204a95
	  Boot ID:                    c8d572fc-29b3-45e1-abc8-5f78d915cd39
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-gq2b2                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 etcd-embed-certs-114855                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kube-apiserver-embed-certs-114855             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-embed-certs-114855    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-bfvnl                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-embed-certs-114855             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 metrics-server-74d5c6b9c-jvfz8                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         18m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 18m                kube-proxy       
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node embed-certs-114855 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node embed-certs-114855 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node embed-certs-114855 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18m                kubelet          Node embed-certs-114855 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m                kubelet          Node embed-certs-114855 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m                kubelet          Node embed-certs-114855 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             18m                kubelet          Node embed-certs-114855 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                18m                kubelet          Node embed-certs-114855 status is now: NodeReady
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           18m                node-controller  Node embed-certs-114855 event: Registered Node embed-certs-114855 in Controller
	
	* 
	* ==> dmesg <==
	* [Jul17 19:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.076625] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.548955] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.742032] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.171411] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.610597] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul17 19:59] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.182963] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.243175] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.158852] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.269689] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[ +18.991145] systemd-fstab-generator[917]: Ignoring "noauto" for root device
	[ +19.344202] kauditd_printk_skb: 29 callbacks suppressed
	[Jul17 20:04] systemd-fstab-generator[3562]: Ignoring "noauto" for root device
	[ +10.338751] systemd-fstab-generator[3890]: Ignoring "noauto" for root device
	[ +22.668487] kauditd_printk_skb: 7 callbacks suppressed
	[Jul17 20:21] hrtimer: interrupt took 4105679 ns
	
	* 
	* ==> etcd [6f2263eee0373c7de4ad436ad141ff04701702f71d148c95071e973de50a56e8] <==
	* {"level":"info","ts":"2023-07-17T20:18:25.626Z","caller":"traceutil/trace.go:171","msg":"trace[1228921089] transaction","detail":"{read_only:false; response_revision:1163; number_of_response:1; }","duration":"108.002304ms","start":"2023-07-17T20:18:25.517Z","end":"2023-07-17T20:18:25.625Z","steps":["trace[1228921089] 'process raft request'  (duration: 107.362865ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T20:18:25.920Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.060208ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9250263228476319370 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:005f896574306689>","response":"size:39"}
	{"level":"info","ts":"2023-07-17T20:18:25.920Z","caller":"traceutil/trace.go:171","msg":"trace[1820368934] linearizableReadLoop","detail":"{readStateIndex:1347; appliedIndex:1346; }","duration":"228.010069ms","start":"2023-07-17T20:18:25.692Z","end":"2023-07-17T20:18:25.920Z","steps":["trace[1820368934] 'read index received'  (duration: 66.571403ms)","trace[1820368934] 'applied index is now lower than readState.Index'  (duration: 161.437727ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-17T20:18:25.920Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"228.119508ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-17T20:18:25.920Z","caller":"traceutil/trace.go:171","msg":"trace[1383322982] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1163; }","duration":"228.147156ms","start":"2023-07-17T20:18:25.692Z","end":"2023-07-17T20:18:25.920Z","steps":["trace[1383322982] 'agreement among raft nodes before linearized reading'  (duration: 228.061731ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T20:18:26.273Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"251.207318ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/default/kubernetes\" ","response":"range_response_count:1 size:481"}
	{"level":"info","ts":"2023-07-17T20:18:26.273Z","caller":"traceutil/trace.go:171","msg":"trace[149611325] range","detail":"{range_begin:/registry/endpointslices/default/kubernetes; range_end:; response_count:1; response_revision:1164; }","duration":"251.310812ms","start":"2023-07-17T20:18:26.022Z","end":"2023-07-17T20:18:26.273Z","steps":["trace[149611325] 'range keys from in-memory index tree'  (duration: 250.853034ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T20:18:26.961Z","caller":"traceutil/trace.go:171","msg":"trace[1101508797] transaction","detail":"{read_only:false; response_revision:1165; number_of_response:1; }","duration":"109.906822ms","start":"2023-07-17T20:18:26.851Z","end":"2023-07-17T20:18:26.961Z","steps":["trace[1101508797] 'process raft request'  (duration: 109.03207ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T20:18:55.389Z","caller":"traceutil/trace.go:171","msg":"trace[389475561] transaction","detail":"{read_only:false; response_revision:1186; number_of_response:1; }","duration":"121.646531ms","start":"2023-07-17T20:18:55.268Z","end":"2023-07-17T20:18:55.389Z","steps":["trace[389475561] 'process raft request'  (duration: 121.42201ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T20:18:56.067Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.117459ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9250263228476319523 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.213\" mod_revision:1179 > success:<request_put:<key:\"/registry/masterleases/192.168.39.213\" value_size:67 lease:26891191621543712 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.213\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-07-17T20:18:56.067Z","caller":"traceutil/trace.go:171","msg":"trace[513940006] linearizableReadLoop","detail":"{readStateIndex:1377; appliedIndex:1376; }","duration":"158.599768ms","start":"2023-07-17T20:18:55.908Z","end":"2023-07-17T20:18:56.067Z","steps":["trace[513940006] 'read index received'  (duration: 11.669467ms)","trace[513940006] 'applied index is now lower than readState.Index'  (duration: 146.929307ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-17T20:18:56.067Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.737075ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-17T20:18:56.067Z","caller":"traceutil/trace.go:171","msg":"trace[1823840785] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1187; }","duration":"158.794089ms","start":"2023-07-17T20:18:55.908Z","end":"2023-07-17T20:18:56.067Z","steps":["trace[1823840785] 'agreement among raft nodes before linearized reading'  (duration: 158.636444ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T20:18:56.067Z","caller":"traceutil/trace.go:171","msg":"trace[866523284] transaction","detail":"{read_only:false; response_revision:1187; number_of_response:1; }","duration":"277.30218ms","start":"2023-07-17T20:18:55.790Z","end":"2023-07-17T20:18:56.067Z","steps":["trace[866523284] 'process raft request'  (duration: 130.312321ms)","trace[866523284] 'compare'  (duration: 146.003986ms)"],"step_count":2}
	{"level":"info","ts":"2023-07-17T20:18:56.780Z","caller":"traceutil/trace.go:171","msg":"trace[1742090471] transaction","detail":"{read_only:false; response_revision:1188; number_of_response:1; }","duration":"346.935326ms","start":"2023-07-17T20:18:56.433Z","end":"2023-07-17T20:18:56.780Z","steps":["trace[1742090471] 'process raft request'  (duration: 341.188229ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T20:18:56.791Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T20:18:56.433Z","time spent":"357.908257ms","remote":"127.0.0.1:47796","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":561,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/embed-certs-114855\" mod_revision:1180 > success:<request_put:<key:\"/registry/leases/kube-node-lease/embed-certs-114855\" value_size:502 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/embed-certs-114855\" > >"}
	{"level":"info","ts":"2023-07-17T20:19:21.409Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":965}
	{"level":"info","ts":"2023-07-17T20:19:21.411Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":965,"took":"1.760053ms","hash":1969879029}
	{"level":"info","ts":"2023-07-17T20:19:21.411Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1969879029,"revision":965,"compact-revision":722}
	{"level":"info","ts":"2023-07-17T20:20:45.795Z","caller":"traceutil/trace.go:171","msg":"trace[298547328] linearizableReadLoop","detail":"{readStateIndex:1491; appliedIndex:1490; }","duration":"102.648774ms","start":"2023-07-17T20:20:45.692Z","end":"2023-07-17T20:20:45.795Z","steps":["trace[298547328] 'read index received'  (duration: 50.645083ms)","trace[298547328] 'applied index is now lower than readState.Index'  (duration: 52.002392ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-17T20:20:45.795Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.958132ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-17T20:20:45.795Z","caller":"traceutil/trace.go:171","msg":"trace[122030916] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1277; }","duration":"103.092655ms","start":"2023-07-17T20:20:45.692Z","end":"2023-07-17T20:20:45.795Z","steps":["trace[122030916] 'agreement among raft nodes before linearized reading'  (duration: 102.866752ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T20:20:45.796Z","caller":"traceutil/trace.go:171","msg":"trace[786504569] transaction","detail":"{read_only:false; response_revision:1277; number_of_response:1; }","duration":"186.279441ms","start":"2023-07-17T20:20:45.609Z","end":"2023-07-17T20:20:45.796Z","steps":["trace[786504569] 'process raft request'  (duration: 133.698022ms)","trace[786504569] 'compare'  (duration: 51.556614ms)"],"step_count":2}
	{"level":"info","ts":"2023-07-17T20:21:55.268Z","caller":"traceutil/trace.go:171","msg":"trace[53202705] transaction","detail":"{read_only:false; response_revision:1333; number_of_response:1; }","duration":"241.417965ms","start":"2023-07-17T20:21:55.027Z","end":"2023-07-17T20:21:55.268Z","steps":["trace[53202705] 'process raft request'  (duration: 241.22247ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T20:22:37.727Z","caller":"traceutil/trace.go:171","msg":"trace[45709801] transaction","detail":"{read_only:false; response_revision:1368; number_of_response:1; }","duration":"119.316104ms","start":"2023-07-17T20:22:37.608Z","end":"2023-07-17T20:22:37.727Z","steps":["trace[45709801] 'process raft request'  (duration: 118.894971ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  20:22:50 up 24 min,  0 users,  load average: 0.18, 0.24, 0.25
	Linux embed-certs-114855 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [b983a08dbeafca614f97da9fceb27311f666efeae88106fd5c7b56564c0f530f] <==
	* E0717 20:19:24.400117       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 20:19:24.400261       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0717 20:19:24.400131       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 20:19:24.401690       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 20:20:23.281911       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.108.97.242:443: connect: connection refused
	I0717 20:20:23.282401       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0717 20:20:24.401593       1 handler_proxy.go:100] no RequestInfo found in the context
	W0717 20:20:24.401829       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 20:20:24.401897       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 20:20:24.401960       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0717 20:20:24.402030       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 20:20:24.403899       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 20:21:23.281882       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.108.97.242:443: connect: connection refused
	I0717 20:21:23.281955       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0717 20:22:23.282773       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.108.97.242:443: connect: connection refused
	I0717 20:22:23.282877       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0717 20:22:24.403604       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 20:22:24.403681       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 20:22:24.403698       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 20:22:24.404640       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 20:22:24.404743       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 20:22:24.404752       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [7a8fd7290abfec0485b69f438c56aa8738af795b749543e5b3e7181aaa255bbd] <==
	* W0717 20:16:40.363068       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:17:09.821705       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:17:10.372424       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:17:39.829294       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:17:40.381755       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:18:09.837012       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:18:10.392795       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:18:39.845089       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:18:40.404801       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:19:09.850249       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:19:10.417714       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:19:39.856884       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:19:40.429975       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:20:09.867720       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:20:10.442836       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:20:39.876985       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:20:40.474843       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:21:09.883763       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:21:10.488900       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:21:39.890146       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:21:40.499811       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:22:09.900980       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:22:10.515924       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 20:22:39.922934       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 20:22:40.529964       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	
	* 
	* ==> kube-proxy [c3094a9649f154ac4cae90e4f9c8c9baf9bea4051add259f2318a6528ec34c39] <==
	* I0717 20:04:45.185454       1 node.go:141] Successfully retrieved node IP: 192.168.39.213
	I0717 20:04:45.185652       1 server_others.go:110] "Detected node IP" address="192.168.39.213"
	I0717 20:04:45.185725       1 server_others.go:554] "Using iptables proxy"
	I0717 20:04:45.247298       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0717 20:04:45.247359       1 server_others.go:192] "Using iptables Proxier"
	I0717 20:04:45.248158       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 20:04:45.249489       1 server.go:658] "Version info" version="v1.27.3"
	I0717 20:04:45.249671       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 20:04:45.252764       1 config.go:188] "Starting service config controller"
	I0717 20:04:45.253755       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 20:04:45.253971       1 config.go:315] "Starting node config controller"
	I0717 20:04:45.253982       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 20:04:45.254580       1 config.go:97] "Starting endpoint slice config controller"
	I0717 20:04:45.254636       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 20:04:45.357432       1 shared_informer.go:318] Caches are synced for node config
	I0717 20:04:45.357491       1 shared_informer.go:318] Caches are synced for service config
	I0717 20:04:45.357584       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [20ad6b729731380bc142c0e98ceded726b67d29c002f245e941461b6f106da52] <==
	* W0717 20:04:24.271567       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 20:04:24.271689       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 20:04:24.276112       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 20:04:24.276365       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 20:04:24.302713       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 20:04:24.302804       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 20:04:24.326540       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 20:04:24.326661       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 20:04:24.344213       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 20:04:24.344267       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 20:04:24.476620       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 20:04:24.476674       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 20:04:24.507367       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 20:04:24.507422       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 20:04:24.526763       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 20:04:24.526933       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 20:04:24.569899       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 20:04:24.570022       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 20:04:24.755359       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 20:04:24.755453       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 20:04:24.779120       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 20:04:24.779316       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 20:04:24.859302       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 20:04:24.859447       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0717 20:04:26.511621       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-07-17 19:58:50 UTC, ends at Mon 2023-07-17 20:22:50 UTC. --
	Jul 17 20:20:27 embed-certs-114855 kubelet[3897]: E0717 20:20:27.261324    3897 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 17 20:20:27 embed-certs-114855 kubelet[3897]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 20:20:27 embed-certs-114855 kubelet[3897]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 20:20:27 embed-certs-114855 kubelet[3897]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 20:20:38 embed-certs-114855 kubelet[3897]: E0717 20:20:38.152833    3897 remote_image.go:167] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 17 20:20:38 embed-certs-114855 kubelet[3897]: E0717 20:20:38.153006    3897 kuberuntime_image.go:53] "Failed to pull image" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 17 20:20:38 embed-certs-114855 kubelet[3897]: E0717 20:20:38.153411    3897 kuberuntime_manager.go:1212] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-hxz6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod metrics-server-74d5c6b9c-jvfz8_kube-system(f861e320-9125-4081-b043-c90d8b027f71): ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 17 20:20:38 embed-certs-114855 kubelet[3897]: E0717 20:20:38.153464    3897 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-74d5c6b9c-jvfz8" podUID=f861e320-9125-4081-b043-c90d8b027f71
	Jul 17 20:20:50 embed-certs-114855 kubelet[3897]: E0717 20:20:50.131633    3897 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-jvfz8" podUID=f861e320-9125-4081-b043-c90d8b027f71
	Jul 17 20:21:02 embed-certs-114855 kubelet[3897]: E0717 20:21:02.130644    3897 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-jvfz8" podUID=f861e320-9125-4081-b043-c90d8b027f71
	Jul 17 20:21:15 embed-certs-114855 kubelet[3897]: E0717 20:21:15.132437    3897 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-jvfz8" podUID=f861e320-9125-4081-b043-c90d8b027f71
	Jul 17 20:21:27 embed-certs-114855 kubelet[3897]: E0717 20:21:27.257796    3897 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 17 20:21:27 embed-certs-114855 kubelet[3897]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 20:21:27 embed-certs-114855 kubelet[3897]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 20:21:27 embed-certs-114855 kubelet[3897]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 20:21:30 embed-certs-114855 kubelet[3897]: E0717 20:21:30.131511    3897 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-jvfz8" podUID=f861e320-9125-4081-b043-c90d8b027f71
	Jul 17 20:21:45 embed-certs-114855 kubelet[3897]: E0717 20:21:45.132639    3897 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-jvfz8" podUID=f861e320-9125-4081-b043-c90d8b027f71
	Jul 17 20:22:00 embed-certs-114855 kubelet[3897]: E0717 20:22:00.133026    3897 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-jvfz8" podUID=f861e320-9125-4081-b043-c90d8b027f71
	Jul 17 20:22:12 embed-certs-114855 kubelet[3897]: E0717 20:22:12.132268    3897 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-jvfz8" podUID=f861e320-9125-4081-b043-c90d8b027f71
	Jul 17 20:22:25 embed-certs-114855 kubelet[3897]: E0717 20:22:25.132528    3897 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-jvfz8" podUID=f861e320-9125-4081-b043-c90d8b027f71
	Jul 17 20:22:27 embed-certs-114855 kubelet[3897]: E0717 20:22:27.259960    3897 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 17 20:22:27 embed-certs-114855 kubelet[3897]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 20:22:27 embed-certs-114855 kubelet[3897]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 20:22:27 embed-certs-114855 kubelet[3897]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 20:22:38 embed-certs-114855 kubelet[3897]: E0717 20:22:38.132030    3897 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-jvfz8" podUID=f861e320-9125-4081-b043-c90d8b027f71
	
	* 
	* ==> storage-provisioner [1f09aa9710f96e1c9a26555d019312ac5b9995829009e31064589d25e60acaea] <==
	* I0717 20:04:45.064087       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 20:04:45.105889       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 20:04:45.106146       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 20:04:45.122633       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 20:04:45.124944       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-114855_e52dd33a-980c-4403-ba62-ffac53a0b460!
	I0717 20:04:45.134847       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b98ed5b9-8007-47ec-b3ec-aa2586e849ab", APIVersion:"v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-114855_e52dd33a-980c-4403-ba62-ffac53a0b460 became leader
	I0717 20:04:45.235341       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-114855_e52dd33a-980c-4403-ba62-ffac53a0b460!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-114855 -n embed-certs-114855
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-114855 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5c6b9c-jvfz8
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-114855 describe pod metrics-server-74d5c6b9c-jvfz8
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-114855 describe pod metrics-server-74d5c6b9c-jvfz8: exit status 1 (91.560904ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5c6b9c-jvfz8" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-114855 describe pod metrics-server-74d5c6b9c-jvfz8: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (290.23s)

                                                
                                    

Test pass (225/288)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 7.22
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.27.3/json-events 4.69
11 TestDownloadOnly/v1.27.3/preload-exists 0
15 TestDownloadOnly/v1.27.3/LogsDuration 0.07
16 TestDownloadOnly/DeleteAll 0.15
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.14
19 TestBinaryMirror 0.59
20 TestOffline 105.23
22 TestAddons/Setup 148.34
24 TestAddons/parallel/Registry 16.3
26 TestAddons/parallel/InspektorGadget 11.48
27 TestAddons/parallel/MetricsServer 6.54
28 TestAddons/parallel/HelmTiller 14.9
30 TestAddons/parallel/CSI 53.99
31 TestAddons/parallel/Headlamp 15.95
32 TestAddons/parallel/CloudSpanner 6.32
35 TestAddons/serial/GCPAuth/Namespaces 0.14
37 TestCertOptions 81.56
38 TestCertExpiration 303.04
40 TestForceSystemdFlag 84.02
41 TestForceSystemdEnv 55.41
43 TestKVMDriverInstallOrUpdate 2.17
48 TestErrorSpam/start 0.38
49 TestErrorSpam/status 0.77
50 TestErrorSpam/pause 1.53
51 TestErrorSpam/unpause 1.7
52 TestErrorSpam/stop 2.25
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 101.09
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 138.98
59 TestFunctional/serial/KubeContext 0.05
60 TestFunctional/serial/KubectlGetPods 0.08
63 TestFunctional/serial/CacheCmd/cache/add_remote 3.28
64 TestFunctional/serial/CacheCmd/cache/add_local 1.08
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
66 TestFunctional/serial/CacheCmd/cache/list 0.05
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
68 TestFunctional/serial/CacheCmd/cache/cache_reload 1.77
69 TestFunctional/serial/CacheCmd/cache/delete 0.1
70 TestFunctional/serial/MinikubeKubectlCmd 0.12
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
72 TestFunctional/serial/ExtraConfig 57.3
73 TestFunctional/serial/ComponentHealth 0.07
74 TestFunctional/serial/LogsCmd 1.53
75 TestFunctional/serial/LogsFileCmd 1.49
76 TestFunctional/serial/InvalidService 4.49
78 TestFunctional/parallel/ConfigCmd 0.36
79 TestFunctional/parallel/DashboardCmd 32.84
80 TestFunctional/parallel/DryRun 0.36
81 TestFunctional/parallel/InternationalLanguage 0.16
82 TestFunctional/parallel/StatusCmd 1.22
86 TestFunctional/parallel/ServiceCmdConnect 13.83
87 TestFunctional/parallel/AddonsCmd 0.14
88 TestFunctional/parallel/PersistentVolumeClaim 54.52
90 TestFunctional/parallel/SSHCmd 0.47
91 TestFunctional/parallel/CpCmd 1.22
92 TestFunctional/parallel/MySQL 31.7
93 TestFunctional/parallel/FileSync 0.25
94 TestFunctional/parallel/CertSync 1.58
98 TestFunctional/parallel/NodeLabels 0.07
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
102 TestFunctional/parallel/License 0.17
103 TestFunctional/parallel/Version/short 0.05
104 TestFunctional/parallel/Version/components 0.6
105 TestFunctional/parallel/ProfileCmd/profile_not_create 0.34
115 TestFunctional/parallel/ProfileCmd/profile_list 0.35
116 TestFunctional/parallel/ServiceCmd/DeployApp 13.3
117 TestFunctional/parallel/ProfileCmd/profile_json_output 0.29
118 TestFunctional/parallel/MountCmd/any-port 11.2
119 TestFunctional/parallel/MountCmd/specific-port 2.2
120 TestFunctional/parallel/ServiceCmd/List 0.34
121 TestFunctional/parallel/ServiceCmd/JSONOutput 0.37
122 TestFunctional/parallel/ServiceCmd/HTTPS 0.42
123 TestFunctional/parallel/MountCmd/VerifyCleanup 1.38
124 TestFunctional/parallel/ServiceCmd/Format 0.41
125 TestFunctional/parallel/ServiceCmd/URL 0.36
126 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
127 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
128 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
129 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
130 TestFunctional/parallel/ImageCommands/ImageBuild 2.78
131 TestFunctional/parallel/ImageCommands/Setup 1.11
132 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
133 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
134 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
135 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 7.17
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 5.93
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 10.39
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 5.45
139 TestFunctional/parallel/ImageCommands/ImageRemove 1.69
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.29
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.84
142 TestFunctional/delete_addon-resizer_images 0.08
143 TestFunctional/delete_my-image_image 0.02
144 TestFunctional/delete_minikube_cached_images 0.02
148 TestIngressAddonLegacy/StartLegacyK8sCluster 111.05
150 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 13.7
151 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.7
155 TestJSONOutput/start/Command 100.62
156 TestJSONOutput/start/Audit 0
158 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
161 TestJSONOutput/pause/Command 0.67
162 TestJSONOutput/pause/Audit 0
164 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/unpause/Command 0.69
168 TestJSONOutput/unpause/Audit 0
170 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/stop/Command 17.12
174 TestJSONOutput/stop/Audit 0
176 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
178 TestErrorJSONOutput 0.22
183 TestMainNoArgs 0.05
184 TestMinikubeProfile 101.56
187 TestMountStart/serial/StartWithMountFirst 29.96
188 TestMountStart/serial/VerifyMountFirst 0.41
189 TestMountStart/serial/StartWithMountSecond 30.83
190 TestMountStart/serial/VerifyMountSecond 0.4
191 TestMountStart/serial/DeleteFirst 0.9
192 TestMountStart/serial/VerifyMountPostDelete 0.42
193 TestMountStart/serial/Stop 1.23
194 TestMountStart/serial/RestartStopped 22.68
195 TestMountStart/serial/VerifyMountPostStop 0.4
198 TestMultiNode/serial/FreshStart2Nodes 117.15
199 TestMultiNode/serial/DeployApp2Nodes 5.03
201 TestMultiNode/serial/AddNode 41.15
202 TestMultiNode/serial/ProfileList 0.23
203 TestMultiNode/serial/CopyFile 7.74
204 TestMultiNode/serial/StopNode 3
205 TestMultiNode/serial/StartAfterStop 33.47
207 TestMultiNode/serial/DeleteNode 1.87
209 TestMultiNode/serial/RestartMultiNode 447.31
210 TestMultiNode/serial/ValidateNameConflict 50.71
217 TestScheduledStopUnix 120.76
223 TestKubernetesUpgrade 234.19
226 TestPause/serial/Start 94.88
227 TestStoppedBinaryUpgrade/Setup 0.45
238 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
239 TestNoKubernetes/serial/StartWithK8s 53.68
247 TestNetworkPlugins/group/false 3.31
251 TestNoKubernetes/serial/StartWithStopK8s 63.02
252 TestStoppedBinaryUpgrade/MinikubeLogs 0.46
253 TestNoKubernetes/serial/Start 74.24
255 TestStartStop/group/old-k8s-version/serial/FirstStart 149.79
257 TestStartStop/group/no-preload/serial/FirstStart 149.93
258 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
259 TestNoKubernetes/serial/ProfileList 0.76
260 TestNoKubernetes/serial/Stop 1.31
261 TestNoKubernetes/serial/StartNoArgs 73.63
262 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
264 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 106.07
265 TestStartStop/group/old-k8s-version/serial/DeployApp 8.61
266 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.14
269 TestStartStop/group/newest-cni/serial/FirstStart 63.06
270 TestStartStop/group/no-preload/serial/DeployApp 9.57
271 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.4
273 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.5
274 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.39
276 TestStartStop/group/newest-cni/serial/DeployApp 0
277 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.71
278 TestStartStop/group/newest-cni/serial/Stop 12.12
279 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
280 TestStartStop/group/newest-cni/serial/SecondStart 51.59
281 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
282 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
283 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
284 TestStartStop/group/newest-cni/serial/Pause 2.5
286 TestStartStop/group/embed-certs/serial/FirstStart 103.86
288 TestStartStop/group/old-k8s-version/serial/SecondStart 803.73
290 TestStartStop/group/no-preload/serial/SecondStart 596.52
292 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 565.82
293 TestStartStop/group/embed-certs/serial/DeployApp 9.51
294 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.35
297 TestStartStop/group/embed-certs/serial/SecondStart 702.63
305 TestNetworkPlugins/group/auto/Start 104.02
307 TestNetworkPlugins/group/flannel/Start 90.04
308 TestNetworkPlugins/group/enable-default-cni/Start 112.31
309 TestNetworkPlugins/group/auto/KubeletFlags 0.22
310 TestNetworkPlugins/group/auto/NetCatPod 12.44
311 TestNetworkPlugins/group/flannel/ControllerPod 5.03
312 TestNetworkPlugins/group/auto/DNS 0.22
313 TestNetworkPlugins/group/auto/Localhost 0.21
314 TestNetworkPlugins/group/auto/HairPin 0.2
315 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
316 TestNetworkPlugins/group/flannel/NetCatPod 11.48
317 TestNetworkPlugins/group/flannel/DNS 0.21
318 TestNetworkPlugins/group/flannel/Localhost 0.18
319 TestNetworkPlugins/group/flannel/HairPin 0.2
320 TestNetworkPlugins/group/bridge/Start 106.74
321 TestNetworkPlugins/group/calico/Start 112.72
322 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
323 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.49
324 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
325 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
326 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
327 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
328 TestNetworkPlugins/group/bridge/NetCatPod 13.76
329 TestNetworkPlugins/group/kindnet/Start 77.39
330 TestNetworkPlugins/group/bridge/DNS 0.24
331 TestNetworkPlugins/group/bridge/Localhost 0.18
332 TestNetworkPlugins/group/bridge/HairPin 0.17
333 TestNetworkPlugins/group/calico/ControllerPod 5.03
334 TestNetworkPlugins/group/calico/KubeletFlags 0.24
335 TestNetworkPlugins/group/calico/NetCatPod 12.58
336 TestNetworkPlugins/group/custom-flannel/Start 89.56
337 TestNetworkPlugins/group/calico/DNS 0.34
338 TestNetworkPlugins/group/calico/Localhost 0.25
339 TestNetworkPlugins/group/calico/HairPin 0.22
340 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
341 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
342 TestNetworkPlugins/group/kindnet/NetCatPod 12.42
343 TestNetworkPlugins/group/kindnet/DNS 0.2
344 TestNetworkPlugins/group/kindnet/Localhost 0.18
345 TestNetworkPlugins/group/kindnet/HairPin 0.16
346 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
347 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.46
348 TestNetworkPlugins/group/custom-flannel/DNS 0.19
349 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
350 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
x
+
TestDownloadOnly/v1.16.0/json-events (7.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-435458 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-435458 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (7.219224998s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (7.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-435458
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-435458: exit status 85 (68.883773ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-435458 | jenkins | v1.30.1 | 17 Jul 23 18:43 UTC |          |
	|         | -p download-only-435458        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 18:43:18
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 18:43:18.771235 1068966 out.go:296] Setting OutFile to fd 1 ...
	I0717 18:43:18.771451 1068966 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 18:43:18.771461 1068966 out.go:309] Setting ErrFile to fd 2...
	I0717 18:43:18.771466 1068966 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 18:43:18.771664 1068966 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1061725/.minikube/bin
	W0717 18:43:18.771807 1068966 root.go:314] Error reading config file at /home/jenkins/minikube-integration/16890-1061725/.minikube/config/config.json: open /home/jenkins/minikube-integration/16890-1061725/.minikube/config/config.json: no such file or directory
	I0717 18:43:18.772419 1068966 out.go:303] Setting JSON to true
	I0717 18:43:18.773956 1068966 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":12350,"bootTime":1689607049,"procs":719,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:43:18.774029 1068966 start.go:138] virtualization: kvm guest
	I0717 18:43:18.777340 1068966 out.go:97] [download-only-435458] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	W0717 18:43:18.777490 1068966 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball: no such file or directory
	I0717 18:43:18.779374 1068966 out.go:169] MINIKUBE_LOCATION=16890
	I0717 18:43:18.777585 1068966 notify.go:220] Checking for updates...
	I0717 18:43:18.782774 1068966 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:43:18.784731 1068966 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 18:43:18.786772 1068966 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1061725/.minikube
	I0717 18:43:18.788727 1068966 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0717 18:43:18.792504 1068966 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 18:43:18.792784 1068966 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 18:43:18.828592 1068966 out.go:97] Using the kvm2 driver based on user configuration
	I0717 18:43:18.828637 1068966 start.go:298] selected driver: kvm2
	I0717 18:43:18.828645 1068966 start.go:880] validating driver "kvm2" against <nil>
	I0717 18:43:18.829017 1068966 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:43:18.829133 1068966 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16890-1061725/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 18:43:18.845790 1068966 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.30.1
	I0717 18:43:18.845852 1068966 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 18:43:18.846315 1068966 start_flags.go:382] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0717 18:43:18.846514 1068966 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 18:43:18.846550 1068966 cni.go:84] Creating CNI manager for ""
	I0717 18:43:18.846562 1068966 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:43:18.846569 1068966 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 18:43:18.846579 1068966 start_flags.go:319] config:
	{Name:download-only-435458 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-435458 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 18:43:18.846801 1068966 iso.go:125] acquiring lock: {Name:mk4c08fdde891ec9d41b144ca36ec57b2da32175 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:43:18.849505 1068966 out.go:97] Downloading VM boot image ...
	I0717 18:43:18.849582 1068966 download.go:107] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso
	I0717 18:43:21.185663 1068966 out.go:97] Starting control plane node download-only-435458 in cluster download-only-435458
	I0717 18:43:21.185698 1068966 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0717 18:43:21.206248 1068966 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0717 18:43:21.206300 1068966 cache.go:57] Caching tarball of preloaded images
	I0717 18:43:21.206486 1068966 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0717 18:43:21.208983 1068966 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0717 18:43:21.209026 1068966 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0717 18:43:21.239181 1068966 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/16890-1061725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-435458"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.27.3/json-events (4.69s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-435458 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-435458 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.692362877s)
--- PASS: TestDownloadOnly/v1.27.3/json-events (4.69s)

                                                
                                    
x
+
TestDownloadOnly/v1.27.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.3/preload-exists
--- PASS: TestDownloadOnly/v1.27.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.27.3/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.3/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-435458
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-435458: exit status 85 (69.094956ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-435458 | jenkins | v1.30.1 | 17 Jul 23 18:43 UTC |          |
	|         | -p download-only-435458        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-435458 | jenkins | v1.30.1 | 17 Jul 23 18:43 UTC |          |
	|         | -p download-only-435458        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.3   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 18:43:26
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 18:43:26.059909 1069022 out.go:296] Setting OutFile to fd 1 ...
	I0717 18:43:26.060035 1069022 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 18:43:26.060047 1069022 out.go:309] Setting ErrFile to fd 2...
	I0717 18:43:26.060052 1069022 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 18:43:26.060295 1069022 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1061725/.minikube/bin
	W0717 18:43:26.060443 1069022 root.go:314] Error reading config file at /home/jenkins/minikube-integration/16890-1061725/.minikube/config/config.json: open /home/jenkins/minikube-integration/16890-1061725/.minikube/config/config.json: no such file or directory
	I0717 18:43:26.060939 1069022 out.go:303] Setting JSON to true
	I0717 18:43:26.062465 1069022 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":12357,"bootTime":1689607049,"procs":715,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:43:26.062550 1069022 start.go:138] virtualization: kvm guest
	I0717 18:43:26.065503 1069022 out.go:97] [download-only-435458] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 18:43:26.067701 1069022 out.go:169] MINIKUBE_LOCATION=16890
	I0717 18:43:26.065771 1069022 notify.go:220] Checking for updates...
	I0717 18:43:26.071744 1069022 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:43:26.073885 1069022 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 18:43:26.075923 1069022 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1061725/.minikube
	I0717 18:43:26.077870 1069022 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-435458"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.3/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-435458
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-877913 --alsologtostderr --binary-mirror http://127.0.0.1:45753 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-877913" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-877913
--- PASS: TestBinaryMirror (0.59s)

                                                
                                    
x
+
TestOffline (105.23s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-814891 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-814891 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m44.087425052s)
helpers_test.go:175: Cleaning up "offline-crio-814891" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-814891
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-814891: (1.147021047s)
--- PASS: TestOffline (105.23s)

                                                
                                    
x
+
TestAddons/Setup (148.34s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-962955 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-962955 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m28.337044964s)
--- PASS: TestAddons/Setup (148.34s)

                                                
                                    
x
+
TestAddons/parallel/Registry (16.3s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 25.768808ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-qgwcj" [a58688b7-a416-473a-8314-6cd11129080a] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.028071492s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-glc8x" [e1e6ea98-4e93-499c-982c-fe125d4fb16d] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.011703963s
addons_test.go:316: (dbg) Run:  kubectl --context addons-962955 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-962955 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-962955 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.070574727s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-amd64 -p addons-962955 ip
2023/07/17 18:46:15 [DEBUG] GET http://192.168.39.215:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p addons-962955 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.30s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.48s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-9hk5m" [18ce9419-056b-40a5-b06b-108dd55b7ec2] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.01665964s
addons_test.go:817: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-962955
addons_test.go:817: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-962955: (6.460759426s)
--- PASS: TestAddons/parallel/InspektorGadget (11.48s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.54s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 18.288861ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-844d8db974-gvfk5" [04b104ed-620e-4c2c-835f-4817b395d35b] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.037698804s
addons_test.go:391: (dbg) Run:  kubectl --context addons-962955 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p addons-962955 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p addons-962955 addons disable metrics-server --alsologtostderr -v=1: (1.36310585s)
--- PASS: TestAddons/parallel/MetricsServer (6.54s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (14.9s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 26.182087ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6847666dc-gk8xr" [838657fc-cef5-4ca0-8c2b-73dcac62920b] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.014271842s
addons_test.go:449: (dbg) Run:  kubectl --context addons-962955 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-962955 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (8.983973575s)
addons_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p addons-962955 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (14.90s)

                                                
                                    
x
+
TestAddons/parallel/CSI (53.99s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 27.641061ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-962955 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-962955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-962955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-962955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-962955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-962955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-962955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-962955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-962955 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-962955 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [d1999c43-563d-48f3-b678-72fd542fa7b1] Pending
helpers_test.go:344: "task-pv-pod" [d1999c43-563d-48f3-b678-72fd542fa7b1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [d1999c43-563d-48f3-b678-72fd542fa7b1] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.010524033s
addons_test.go:560: (dbg) Run:  kubectl --context addons-962955 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-962955 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-962955 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-962955 delete pod task-pv-pod
addons_test.go:576: (dbg) Run:  kubectl --context addons-962955 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-962955 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-962955 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-962955 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-962955 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-962955 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-962955 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-962955 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-962955 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-962955 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-962955 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-962955 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d9ec61bd-a912-4894-8377-d44d381c5714] Pending
helpers_test.go:344: "task-pv-pod-restore" [d9ec61bd-a912-4894-8377-d44d381c5714] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d9ec61bd-a912-4894-8377-d44d381c5714] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.02087181s
addons_test.go:602: (dbg) Run:  kubectl --context addons-962955 delete pod task-pv-pod-restore
addons_test.go:602: (dbg) Done: kubectl --context addons-962955 delete pod task-pv-pod-restore: (1.32445573s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-962955 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-962955 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-amd64 -p addons-962955 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-amd64 -p addons-962955 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.988437979s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-amd64 -p addons-962955 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:618: (dbg) Done: out/minikube-linux-amd64 -p addons-962955 addons disable volumesnapshots --alsologtostderr -v=1: (1.032812878s)
--- PASS: TestAddons/parallel/CSI (53.99s)

                                                
                                    
x
+
TestAddons/parallel/Headlamp (15.95s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-962955 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-962955 --alsologtostderr -v=1: (1.924255176s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-66f6498c69-nld6z" [ee8c7553-ddfd-4186-9c43-d21610f26972] Pending
helpers_test.go:344: "headlamp-66f6498c69-nld6z" [ee8c7553-ddfd-4186-9c43-d21610f26972] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-66f6498c69-nld6z" [ee8c7553-ddfd-4186-9c43-d21610f26972] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.021773746s
--- PASS: TestAddons/parallel/Headlamp (15.95s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.32s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-88647b4cb-sv7gn" [903ebef2-4cb5-48fd-86ed-d92cb153000f] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.016363823s
addons_test.go:836: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-962955
addons_test.go:836: (dbg) Done: out/minikube-linux-amd64 addons disable cloud-spanner -p addons-962955: (1.2788537s)
--- PASS: TestAddons/parallel/CloudSpanner (6.32s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-962955 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-962955 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
x
+
TestCertOptions (81.56s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-964775 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-964775 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m20.00277042s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-964775 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-964775 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-964775 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-964775" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-964775
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-964775: (1.048358124s)
--- PASS: TestCertOptions (81.56s)

                                                
                                    
x
+
TestCertExpiration (303.04s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-771494 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
E0717 19:45:43.184037 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-771494 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m40.480033373s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-771494 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-771494 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (21.506590027s)
helpers_test.go:175: Cleaning up "cert-expiration-771494" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-771494
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-771494: (1.056585192s)
--- PASS: TestCertExpiration (303.04s)

                                                
                                    
x
+
TestForceSystemdFlag (84.02s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-499214 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-499214 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m22.74561022s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-499214 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-499214" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-499214
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-499214: (1.059444956s)
--- PASS: TestForceSystemdFlag (84.02s)

                                                
                                    
x
+
TestForceSystemdEnv (55.41s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-444988 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-444988 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (54.340366279s)
helpers_test.go:175: Cleaning up "force-systemd-env-444988" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-444988
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-444988: (1.070305565s)
--- PASS: TestForceSystemdEnv (55.41s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (2.17s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (2.17s)

                                                
                                    
x
+
TestErrorSpam/start (0.38s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-217504 --log_dir /tmp/nospam-217504 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-217504 --log_dir /tmp/nospam-217504 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-217504 --log_dir /tmp/nospam-217504 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

                                                
                                    
x
+
TestErrorSpam/status (0.77s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-217504 --log_dir /tmp/nospam-217504 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-217504 --log_dir /tmp/nospam-217504 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-217504 --log_dir /tmp/nospam-217504 status
--- PASS: TestErrorSpam/status (0.77s)

                                                
                                    
x
+
TestErrorSpam/pause (1.53s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-217504 --log_dir /tmp/nospam-217504 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-217504 --log_dir /tmp/nospam-217504 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-217504 --log_dir /tmp/nospam-217504 pause
--- PASS: TestErrorSpam/pause (1.53s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.7s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-217504 --log_dir /tmp/nospam-217504 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-217504 --log_dir /tmp/nospam-217504 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-217504 --log_dir /tmp/nospam-217504 unpause
--- PASS: TestErrorSpam/unpause (1.70s)

                                                
                                    
x
+
TestErrorSpam/stop (2.25s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-217504 --log_dir /tmp/nospam-217504 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-217504 --log_dir /tmp/nospam-217504 stop: (2.09123126s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-217504 --log_dir /tmp/nospam-217504 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-217504 --log_dir /tmp/nospam-217504 stop
--- PASS: TestErrorSpam/stop (2.25s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/16890-1061725/.minikube/files/etc/test/nested/copy/1068954/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (101.09s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-685960 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-685960 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m41.086942051s)
--- PASS: TestFunctional/serial/StartWithProxy (101.09s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (138.98s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-685960 --alsologtostderr -v=8
E0717 18:56:00.134215 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt: no such file or directory
E0717 18:56:00.140351 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt: no such file or directory
E0717 18:56:00.150762 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt: no such file or directory
E0717 18:56:00.171237 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt: no such file or directory
E0717 18:56:00.211654 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt: no such file or directory
E0717 18:56:00.292034 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt: no such file or directory
E0717 18:56:00.452654 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt: no such file or directory
E0717 18:56:00.773410 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt: no such file or directory
E0717 18:56:01.414618 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt: no such file or directory
E0717 18:56:02.695643 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt: no such file or directory
E0717 18:56:05.256386 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt: no such file or directory
E0717 18:56:10.377416 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt: no such file or directory
E0717 18:56:20.618450 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt: no such file or directory
E0717 18:56:41.098812 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-685960 --alsologtostderr -v=8: (2m18.980422061s)
functional_test.go:659: soft start took 2m18.981266172s for "functional-685960" cluster.
--- PASS: TestFunctional/serial/SoftStart (138.98s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-685960 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-685960 cache add registry.k8s.io/pause:3.1: (1.035724919s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-685960 cache add registry.k8s.io/pause:3.3: (1.055461117s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-685960 cache add registry.k8s.io/pause:latest: (1.185102462s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.28s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-685960 /tmp/TestFunctionalserialCacheCmdcacheadd_local1720787388/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 cache add minikube-local-cache-test:functional-685960
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 cache delete minikube-local-cache-test:functional-685960
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-685960
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.77s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-685960 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (225.208421ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-685960 cache reload: (1.045486812s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.77s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 kubectl -- --context functional-685960 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-685960 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (57.3s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-685960 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0717 18:57:22.060027 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-685960 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (57.302195721s)
functional_test.go:757: restart took 57.302343929s for "functional-685960" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (57.30s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-685960 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.53s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-685960 logs: (1.525969705s)
--- PASS: TestFunctional/serial/LogsCmd (1.53s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.49s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 logs --file /tmp/TestFunctionalserialLogsFileCmd957978660/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-685960 logs --file /tmp/TestFunctionalserialLogsFileCmd957978660/001/logs.txt: (1.488347064s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.49s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.49s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-685960 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-685960
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-685960: exit status 115 (330.530483ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.154:30495 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-685960 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.49s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-685960 config get cpus: exit status 14 (58.615301ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-685960 config get cpus: exit status 14 (53.536487ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)
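Note: the exit codes above show the pattern this test relies on: `config get` on an unset key returns status 14, while `config set`/`config unset` return 0. A rough sketch of the same round trip, assuming the binary and profile from this run; the run helper below is hypothetical and not a harness function.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// run executes the minikube binary used in this report against the
// functional-685960 profile and returns the process exit code.
func run(args ...string) int {
	cmd := exec.Command("out/minikube-linux-amd64", append([]string{"-p", "functional-685960"}, args...)...)
	if err := cmd.Run(); err != nil {
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return ee.ExitCode()
		}
		return -1 // binary missing or could not start
	}
	return 0
}

func main() {
	fmt.Println(run("config", "unset", "cpus"))    // 0
	fmt.Println(run("config", "get", "cpus"))      // 14: key not found, as in the log above
	fmt.Println(run("config", "set", "cpus", "2")) // 0
	fmt.Println(run("config", "get", "cpus"))      // 0 once the key is set
	fmt.Println(run("config", "unset", "cpus"))    // 0
}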

                                                
                                    
TestFunctional/parallel/DashboardCmd (32.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-685960 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-685960 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1076250: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (32.84s)

                                                
                                    
TestFunctional/parallel/DryRun (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-685960 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-685960 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (175.379287ms)

                                                
                                                
-- stdout --
	* [functional-685960] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16890
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16890-1061725/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1061725/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 18:58:17.104973 1075895 out.go:296] Setting OutFile to fd 1 ...
	I0717 18:58:17.105168 1075895 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 18:58:17.105180 1075895 out.go:309] Setting ErrFile to fd 2...
	I0717 18:58:17.105186 1075895 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 18:58:17.105514 1075895 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1061725/.minikube/bin
	I0717 18:58:17.106333 1075895 out.go:303] Setting JSON to false
	I0717 18:58:17.107648 1075895 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":13248,"bootTime":1689607049,"procs":252,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:58:17.107728 1075895 start.go:138] virtualization: kvm guest
	I0717 18:58:17.110938 1075895 out.go:177] * [functional-685960] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 18:58:17.113064 1075895 notify.go:220] Checking for updates...
	I0717 18:58:17.113077 1075895 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 18:58:17.115010 1075895 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:58:17.116975 1075895 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 18:58:17.119207 1075895 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1061725/.minikube
	I0717 18:58:17.121732 1075895 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 18:58:17.124044 1075895 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 18:58:17.126451 1075895 config.go:182] Loaded profile config "functional-685960": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 18:58:17.126919 1075895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:58:17.127000 1075895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:58:17.145000 1075895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34335
	I0717 18:58:17.145395 1075895 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:58:17.146096 1075895 main.go:141] libmachine: Using API Version  1
	I0717 18:58:17.146127 1075895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:58:17.146529 1075895 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:58:17.146800 1075895 main.go:141] libmachine: (functional-685960) Calling .DriverName
	I0717 18:58:17.147163 1075895 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 18:58:17.147464 1075895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:58:17.147504 1075895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:58:17.163407 1075895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39595
	I0717 18:58:17.163871 1075895 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:58:17.164543 1075895 main.go:141] libmachine: Using API Version  1
	I0717 18:58:17.164572 1075895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:58:17.165101 1075895 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:58:17.165412 1075895 main.go:141] libmachine: (functional-685960) Calling .DriverName
	I0717 18:58:17.211209 1075895 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 18:58:17.213605 1075895 start.go:298] selected driver: kvm2
	I0717 18:58:17.213633 1075895 start.go:880] validating driver "kvm2" against &{Name:functional-685960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-685960 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.154 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 18:58:17.213832 1075895 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 18:58:17.217357 1075895 out.go:177] 
	W0717 18:58:17.220051 1075895 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0717 18:58:17.222157 1075895 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-685960 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.36s)
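Note: the dry run is rejected before any VM work starts: requesting 250MB falls below the 1800MB usable minimum and the command exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY), as shown above. A small sketch of that check, assuming the same binary, profile, and driver as in this run; it is not the harness's own assertion.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// A dry-run start with 250MB should be rejected before minikube touches the VM.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-685960",
		"--dry-run", "--memory", "250MB", "--driver=kvm2", "--container-runtime=crio")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 23 {
		fmt.Println("got exit 23 (RSRC_INSUFFICIENT_REQ_MEMORY), matching the log above")
		return
	}
	fmt.Println("unexpected result:", err)
}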

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-685960 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-685960 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (156.336617ms)

                                                
                                                
-- stdout --
	* [functional-685960] minikube v1.30.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16890
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16890-1061725/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1061725/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 18:58:02.104290 1074790 out.go:296] Setting OutFile to fd 1 ...
	I0717 18:58:02.104785 1074790 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 18:58:02.104804 1074790 out.go:309] Setting ErrFile to fd 2...
	I0717 18:58:02.104811 1074790 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 18:58:02.105552 1074790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1061725/.minikube/bin
	I0717 18:58:02.106906 1074790 out.go:303] Setting JSON to false
	I0717 18:58:02.108009 1074790 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":13233,"bootTime":1689607049,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:58:02.108086 1074790 start.go:138] virtualization: kvm guest
	I0717 18:58:02.110861 1074790 out.go:177] * [functional-685960] minikube v1.30.1 sur Ubuntu 20.04 (kvm/amd64)
	I0717 18:58:02.113449 1074790 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 18:58:02.115205 1074790 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:58:02.113581 1074790 notify.go:220] Checking for updates...
	I0717 18:58:02.117168 1074790 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 18:58:02.119370 1074790 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1061725/.minikube
	I0717 18:58:02.121381 1074790 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 18:58:02.123423 1074790 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 18:58:02.125947 1074790 config.go:182] Loaded profile config "functional-685960": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 18:58:02.126318 1074790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:58:02.126391 1074790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:58:02.142168 1074790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39509
	I0717 18:58:02.142647 1074790 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:58:02.143386 1074790 main.go:141] libmachine: Using API Version  1
	I0717 18:58:02.143417 1074790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:58:02.143869 1074790 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:58:02.144133 1074790 main.go:141] libmachine: (functional-685960) Calling .DriverName
	I0717 18:58:02.144432 1074790 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 18:58:02.144754 1074790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:58:02.144793 1074790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:58:02.160910 1074790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35235
	I0717 18:58:02.161382 1074790 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:58:02.162068 1074790 main.go:141] libmachine: Using API Version  1
	I0717 18:58:02.162101 1074790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:58:02.162519 1074790 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:58:02.162895 1074790 main.go:141] libmachine: (functional-685960) Calling .DriverName
	I0717 18:58:02.199481 1074790 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0717 18:58:02.201353 1074790 start.go:298] selected driver: kvm2
	I0717 18:58:02.201378 1074790 start.go:880] validating driver "kvm2" against &{Name:functional-685960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-685960 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.154 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 18:58:02.201525 1074790 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 18:58:02.204409 1074790 out.go:177] 
	W0717 18:58:02.206388 1074790 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0717 18:58:02.208562 1074790 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.22s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (13.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-685960 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-685960 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6fb669fc84-ctrvn" [4a494709-5207-4ea4-aa8b-3f4fc7158a7c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-6fb669fc84-ctrvn" [4a494709-5207-4ea4-aa8b-3f4fc7158a7c] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.027190803s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.50.154:31512
functional_test.go:1674: http://192.168.50.154:31512: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-6fb669fc84-ctrvn

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.154:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.50.154:31512
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (13.83s)
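Note: the test resolves the NodePort URL with `minikube service ... --url` and then requests it. A condensed Go sketch of the same flow, assuming the hello-node-connect deployment and service created above still exist; the URL shown in the log (http://192.168.50.154:31512) is specific to this run and is looked up fresh here.

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Resolve the NodePort URL the same way the test does, then hit it once.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-685960",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		fmt.Println("service --url failed:", err)
		return
	}
	url := strings.TrimSpace(string(out))
	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("GET failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s -> %d\n%s", url, resp.StatusCode, body)
}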

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (54.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [637c8170-beac-4879-9900-b80a060d3ab5] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.022703264s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-685960 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-685960 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-685960 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-685960 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-685960 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1362aa11-b714-4cbc-8686-90ebb5dc1dc9] Pending
helpers_test.go:344: "sp-pod" [1362aa11-b714-4cbc-8686-90ebb5dc1dc9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [1362aa11-b714-4cbc-8686-90ebb5dc1dc9] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.023654359s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-685960 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-685960 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-685960 delete -f testdata/storage-provisioner/pod.yaml: (1.979620828s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-685960 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a9397a06-ded2-49c9-a427-e2fedd86c49b] Pending
helpers_test.go:344: "sp-pod" [a9397a06-ded2-49c9-a427-e2fedd86c49b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a9397a06-ded2-49c9-a427-e2fedd86c49b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 30.018410543s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-685960 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (54.52s)
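Note: between applying the claim and starting the pods, the test re-reads the PVC until it is usable. A minimal polling sketch along the same lines, assuming kubectl can reach the functional-685960 context and that testdata/storage-provisioner/pvc.yaml (which creates a claim named myclaim) has been applied; this is not the harness's own retry logic.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Poll the claim created from testdata/storage-provisioner/pvc.yaml until it binds.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "functional-685960",
			"get", "pvc", "myclaim", "-o", "jsonpath={.status.phase}").Output()
		phase := strings.TrimSpace(string(out))
		if err == nil && phase == "Bound" {
			fmt.Println("pvc myclaim is Bound")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for pvc myclaim")
}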

                                                
                                    
TestFunctional/parallel/SSHCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.47s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 ssh -n functional-685960 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 cp functional-685960:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1266771736/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 ssh -n functional-685960 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.22s)

                                                
                                    
TestFunctional/parallel/MySQL (31.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-685960 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-7db894d786-99ffn" [ddae4097-ceea-4c1d-9f62-27612df1cb2d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-7db894d786-99ffn" [ddae4097-ceea-4c1d-9f62-27612df1cb2d] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 27.020587543s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-685960 exec mysql-7db894d786-99ffn -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-685960 exec mysql-7db894d786-99ffn -- mysql -ppassword -e "show databases;": exit status 1 (467.959555ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-685960 exec mysql-7db894d786-99ffn -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-685960 exec mysql-7db894d786-99ffn -- mysql -ppassword -e "show databases;": exit status 1 (422.474006ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-685960 exec mysql-7db894d786-99ffn -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (31.70s)
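Note: the two non-zero exits above (ERROR 1045, then ERROR 2002) are expected while mysqld is still initializing inside the pod, which is why the test simply reruns the query until it succeeds. A sketch of that retry loop, assuming the kubectl context from this run; the pod name is the one from this run and would differ on a fresh deployment.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// mysqld restarts during initialization, so retry instead of failing on
	// the first ERROR 1045 / ERROR 2002 seen in the log above.
	pod := "mysql-7db894d786-99ffn"
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", "--context", "functional-685960", "exec", pod,
			"--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("attempt %d succeeded:\n%s", attempt, out)
			return
		}
		fmt.Printf("attempt %d: %v\n", attempt, err)
		time.Sleep(10 * time.Second)
	}
}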

                                                
                                    
TestFunctional/parallel/FileSync (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1068954/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 ssh "sudo cat /etc/test/nested/copy/1068954/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

                                                
                                    
TestFunctional/parallel/CertSync (1.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1068954.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 ssh "sudo cat /etc/ssl/certs/1068954.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1068954.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 ssh "sudo cat /usr/share/ca-certificates/1068954.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/10689542.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 ssh "sudo cat /etc/ssl/certs/10689542.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/10689542.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 ssh "sudo cat /usr/share/ca-certificates/10689542.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.58s)
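Note: the hashed filenames checked above (51391683.0 and 3ec20f2e.0) appear to follow the OpenSSL subject-hash naming convention used under /etc/ssl/certs. A sketch of deriving such a name for a synced certificate, assuming openssl is installed locally; the input path below is a placeholder, not a file from this run.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Compute the subject hash a cert would be filed under in /etc/ssl/certs.
	out, err := exec.Command("openssl", "x509", "-noout", "-subject_hash",
		"-in", "/path/to/1068954.pem").Output() // placeholder path for the synced cert
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	fmt.Println(strings.TrimSpace(string(out)) + ".0")
}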

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-685960 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-685960 ssh "sudo systemctl is-active docker": exit status 1 (241.850414ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-685960 ssh "sudo systemctl is-active containerd": exit status 1 (219.055ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)
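Note: on this cri-o cluster the other runtimes are expected to be inactive. `systemctl is-active` exits with status 3 for an inactive unit (hence the "Process exited with status 3" lines above), and minikube ssh passes that failure through as a non-zero exit. A compact sketch of the same check, assuming the binary and profile from this run.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Each non-cri-o runtime should print "inactive" and exit non-zero.
	for _, svc := range []string{"docker", "containerd"} {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-685960",
			"ssh", "sudo systemctl is-active "+svc).CombinedOutput()
		fmt.Printf("%s: %q (err: %v)\n", svc, strings.TrimSpace(string(out)), err)
	}
}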

                                                
                                    
TestFunctional/parallel/License (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.17s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.60s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "287.93479ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "63.767718ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (13.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-685960 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-685960 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-775766b4cc-gm8b9" [4c1b1b00-d6d9-420b-8047-498188d37ad1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-775766b4cc-gm8b9" [4c1b1b00-d6d9-420b-8047-498188d37ad1] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.015285252s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.30s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "239.608485ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "49.005878ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (11.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-685960 /tmp/TestFunctionalparallelMountCmdany-port3934746263/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1689620282217249461" to /tmp/TestFunctionalparallelMountCmdany-port3934746263/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1689620282217249461" to /tmp/TestFunctionalparallelMountCmdany-port3934746263/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1689620282217249461" to /tmp/TestFunctionalparallelMountCmdany-port3934746263/001/test-1689620282217249461
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-685960 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (205.39423ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 17 18:58 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 17 18:58 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 17 18:58 test-1689620282217249461
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 ssh cat /mount-9p/test-1689620282217249461
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-685960 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [18b18fb2-07c0-438a-861a-5270accb61ba] Pending
helpers_test.go:344: "busybox-mount" [18b18fb2-07c0-438a-861a-5270accb61ba] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [18b18fb2-07c0-438a-861a-5270accb61ba] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [18b18fb2-07c0-438a-861a-5270accb61ba] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.021645146s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-685960 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-685960 /tmp/TestFunctionalparallelMountCmdany-port3934746263/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.20s)
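Note: the first findmnt probe above fails because the 9p mount is still coming up, so the test retries. A sketch of the same start-then-poll pattern, assuming the binary and profile from this run; /tmp/hostdir stands in for the per-test temp directory and is not a path from this log.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Start the 9p mount in the background, the way the test's daemon helper does.
	mount := exec.Command("out/minikube-linux-amd64", "mount", "-p", "functional-685960",
		"/tmp/hostdir:/mount-9p")
	if err := mount.Start(); err != nil {
		fmt.Println("could not start mount:", err)
		return
	}
	defer mount.Process.Kill() // tear the mount helper down on the way out

	// The mount takes a moment to appear, so poll findmnt instead of checking once.
	for i := 0; i < 10; i++ {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-685960",
			"ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Printf("mounted:\n%s", out)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("mount never appeared")
}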

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-685960 /tmp/TestFunctionalparallelMountCmdspecific-port511195111/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-685960 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (241.786003ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-685960 /tmp/TestFunctionalparallelMountCmdspecific-port511195111/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-685960 ssh "sudo umount -f /mount-9p": exit status 1 (256.352728ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-685960 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-685960 /tmp/TestFunctionalparallelMountCmdspecific-port511195111/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.20s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 service list -o json
functional_test.go:1493: Took "366.855367ms" to run "out/minikube-linux-amd64 -p functional-685960 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.50.154:31319
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-685960 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1350973215/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-685960 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1350973215/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-685960 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1350973215/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-685960 ssh "findmnt -T" /mount1: exit status 1 (311.718736ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-685960 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-685960 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1350973215/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-685960 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1350973215/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-685960 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1350973215/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.38s)
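Note: after `mount --kill=true` the three mount helpers are gone (the stop calls above find no parent process), so findmnt should now fail for every mount point. A quick verification sketch, assuming the binary and profile from this run.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// All three mounts created above should have been torn down by --kill=true.
	for _, mp := range []string{"/mount1", "/mount2", "/mount3"} {
		err := exec.Command("out/minikube-linux-amd64", "-p", "functional-685960",
			"ssh", "findmnt -T "+mp).Run()
		fmt.Printf("%s still mounted: %v\n", mp, err == nil)
	}
}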

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.41s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.50.154:31319
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-685960 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.3
registry.k8s.io/kube-proxy:v1.27.3
registry.k8s.io/kube-controller-manager:v1.27.3
registry.k8s.io/kube-apiserver:v1.27.3
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-685960
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-685960
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-685960 image ls --format short --alsologtostderr:
I0717 18:58:53.699967 1076919 out.go:296] Setting OutFile to fd 1 ...
I0717 18:58:53.700116 1076919 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 18:58:53.700155 1076919 out.go:309] Setting ErrFile to fd 2...
I0717 18:58:53.700169 1076919 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 18:58:53.700442 1076919 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1061725/.minikube/bin
I0717 18:58:53.701105 1076919 config.go:182] Loaded profile config "functional-685960": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 18:58:53.701221 1076919 config.go:182] Loaded profile config "functional-685960": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 18:58:53.701597 1076919 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 18:58:53.701668 1076919 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 18:58:53.718174 1076919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42127
I0717 18:58:53.718799 1076919 main.go:141] libmachine: () Calling .GetVersion
I0717 18:58:53.719594 1076919 main.go:141] libmachine: Using API Version  1
I0717 18:58:53.719645 1076919 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 18:58:53.720021 1076919 main.go:141] libmachine: () Calling .GetMachineName
I0717 18:58:53.720251 1076919 main.go:141] libmachine: (functional-685960) Calling .GetState
I0717 18:58:53.722431 1076919 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 18:58:53.722484 1076919 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 18:58:53.738716 1076919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35861
I0717 18:58:53.739222 1076919 main.go:141] libmachine: () Calling .GetVersion
I0717 18:58:53.739770 1076919 main.go:141] libmachine: Using API Version  1
I0717 18:58:53.739790 1076919 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 18:58:53.740163 1076919 main.go:141] libmachine: () Calling .GetMachineName
I0717 18:58:53.740355 1076919 main.go:141] libmachine: (functional-685960) Calling .DriverName
I0717 18:58:53.740666 1076919 ssh_runner.go:195] Run: systemctl --version
I0717 18:58:53.740704 1076919 main.go:141] libmachine: (functional-685960) Calling .GetSSHHostname
I0717 18:58:53.744423 1076919 main.go:141] libmachine: (functional-685960) DBG | domain functional-685960 has defined MAC address 52:54:00:72:3b:56 in network mk-functional-685960
I0717 18:58:53.744851 1076919 main.go:141] libmachine: (functional-685960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:3b:56", ip: ""} in network mk-functional-685960: {Iface:virbr1 ExpiryTime:2023-07-17 19:53:05 +0000 UTC Type:0 Mac:52:54:00:72:3b:56 Iaid: IPaddr:192.168.50.154 Prefix:24 Hostname:functional-685960 Clientid:01:52:54:00:72:3b:56}
I0717 18:58:53.744885 1076919 main.go:141] libmachine: (functional-685960) DBG | domain functional-685960 has defined IP address 192.168.50.154 and MAC address 52:54:00:72:3b:56 in network mk-functional-685960
I0717 18:58:53.745126 1076919 main.go:141] libmachine: (functional-685960) Calling .GetSSHPort
I0717 18:58:53.745332 1076919 main.go:141] libmachine: (functional-685960) Calling .GetSSHKeyPath
I0717 18:58:53.745618 1076919 main.go:141] libmachine: (functional-685960) Calling .GetSSHUsername
I0717 18:58:53.745779 1076919 sshutil.go:53] new ssh client: &{IP:192.168.50.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/functional-685960/id_rsa Username:docker}
I0717 18:58:53.845479 1076919 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 18:58:53.897184 1076919 main.go:141] libmachine: Making call to close driver server
I0717 18:58:53.897214 1076919 main.go:141] libmachine: (functional-685960) Calling .Close
I0717 18:58:53.897584 1076919 main.go:141] libmachine: Successfully made call to close driver server
I0717 18:58:53.897618 1076919 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 18:58:53.897630 1076919 main.go:141] libmachine: Making call to close driver server
I0717 18:58:53.897640 1076919 main.go:141] libmachine: (functional-685960) Calling .Close
I0717 18:58:53.897923 1076919 main.go:141] libmachine: Successfully made call to close driver server
I0717 18:58:53.897942 1076919 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)
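As the stderr above shows, every ImageList* variant is answered by CRI-O on the node: minikube opens an SSH session, runs sudo crictl images --output json, and renders that JSON in the requested format. The same data can be inspected directly (profile name as in this run):

    # what `image ls` wraps, per the ssh_runner call in the log
    out/minikube-linux-amd64 -p functional-685960 ssh "sudo crictl images --output json"
    # the rendered views exercised by the following subtests
    out/minikube-linux-amd64 -p functional-685960 image ls --format short
    out/minikube-linux-amd64 -p functional-685960 image ls --format table
    out/minikube-linux-amd64 -p functional-685960 image ls --format json
    out/minikube-linux-amd64 -p functional-685960 image ls --format yaml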

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-685960 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/google-containers/addon-resizer  | functional-685960  | ffd4cfbbe753e | 34.1MB |
| localhost/minikube-local-cache-test     | functional-685960  | 476b4809e33d0 | 3.35kB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.7-0            | 86b6af7dd652c | 297MB  |
| registry.k8s.io/kube-controller-manager | v1.27.3            | 7cffc01dba0e1 | 114MB  |
| docker.io/kindest/kindnetd              | v20230511-dc714da8 | b0b1fa0f58c6e | 65.2MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/kube-proxy              | v1.27.3            | 5780543258cf0 | 72.7MB |
| registry.k8s.io/kube-scheduler          | v1.27.3            | 41697ceeb70b3 | 59.8MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/library/mysql                 | 5.7                | 2be84dd575ee2 | 588MB  |
| docker.io/library/nginx                 | latest             | 021283c8eb95b | 191MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-apiserver          | v1.27.3            | 08a0c939e61b7 | 122MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-685960 image ls --format table --alsologtostderr:
I0717 18:58:53.981353 1076997 out.go:296] Setting OutFile to fd 1 ...
I0717 18:58:53.981481 1076997 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 18:58:53.981492 1076997 out.go:309] Setting ErrFile to fd 2...
I0717 18:58:53.981499 1076997 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 18:58:53.981787 1076997 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1061725/.minikube/bin
I0717 18:58:53.982388 1076997 config.go:182] Loaded profile config "functional-685960": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 18:58:53.982525 1076997 config.go:182] Loaded profile config "functional-685960": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 18:58:53.982912 1076997 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 18:58:53.982974 1076997 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 18:58:53.997247 1076997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36827
I0717 18:58:53.997766 1076997 main.go:141] libmachine: () Calling .GetVersion
I0717 18:58:53.998660 1076997 main.go:141] libmachine: Using API Version  1
I0717 18:58:53.998708 1076997 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 18:58:53.999086 1076997 main.go:141] libmachine: () Calling .GetMachineName
I0717 18:58:53.999308 1076997 main.go:141] libmachine: (functional-685960) Calling .GetState
I0717 18:58:54.001519 1076997 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 18:58:54.001577 1076997 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 18:58:54.016020 1076997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40153
I0717 18:58:54.016486 1076997 main.go:141] libmachine: () Calling .GetVersion
I0717 18:58:54.016948 1076997 main.go:141] libmachine: Using API Version  1
I0717 18:58:54.016972 1076997 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 18:58:54.017305 1076997 main.go:141] libmachine: () Calling .GetMachineName
I0717 18:58:54.017487 1076997 main.go:141] libmachine: (functional-685960) Calling .DriverName
I0717 18:58:54.017749 1076997 ssh_runner.go:195] Run: systemctl --version
I0717 18:58:54.017779 1076997 main.go:141] libmachine: (functional-685960) Calling .GetSSHHostname
I0717 18:58:54.020989 1076997 main.go:141] libmachine: (functional-685960) DBG | domain functional-685960 has defined MAC address 52:54:00:72:3b:56 in network mk-functional-685960
I0717 18:58:54.021451 1076997 main.go:141] libmachine: (functional-685960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:3b:56", ip: ""} in network mk-functional-685960: {Iface:virbr1 ExpiryTime:2023-07-17 19:53:05 +0000 UTC Type:0 Mac:52:54:00:72:3b:56 Iaid: IPaddr:192.168.50.154 Prefix:24 Hostname:functional-685960 Clientid:01:52:54:00:72:3b:56}
I0717 18:58:54.021862 1076997 main.go:141] libmachine: (functional-685960) DBG | domain functional-685960 has defined IP address 192.168.50.154 and MAC address 52:54:00:72:3b:56 in network mk-functional-685960
I0717 18:58:54.021926 1076997 main.go:141] libmachine: (functional-685960) Calling .GetSSHPort
I0717 18:58:54.022285 1076997 main.go:141] libmachine: (functional-685960) Calling .GetSSHKeyPath
I0717 18:58:54.022521 1076997 main.go:141] libmachine: (functional-685960) Calling .GetSSHUsername
I0717 18:58:54.022685 1076997 sshutil.go:53] new ssh client: &{IP:192.168.50.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/functional-685960/id_rsa Username:docker}
I0717 18:58:54.113106 1076997 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 18:58:54.152376 1076997 main.go:141] libmachine: Making call to close driver server
I0717 18:58:54.152404 1076997 main.go:141] libmachine: (functional-685960) Calling .Close
I0717 18:58:54.152717 1076997 main.go:141] libmachine: Successfully made call to close driver server
I0717 18:58:54.152762 1076997 main.go:141] libmachine: (functional-685960) DBG | Closing plugin on server side
I0717 18:58:54.152783 1076997 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 18:58:54.152796 1076997 main.go:141] libmachine: Making call to close driver server
I0717 18:58:54.152806 1076997 main.go:141] libmachine: (functional-685960) Calling .Close
I0717 18:58:54.153239 1076997 main.go:141] libmachine: Successfully made call to close driver server
I0717 18:58:54.153257 1076997 main.go:141] libmachine: (functional-685960) DBG | Closing plugin on server side
I0717 18:58:54.153265 1076997 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-685960 image ls --format json --alsologtostderr:
[{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-685960"],"size":"34114467"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"2be84dd57
5ee2ecdb186dc43a9cd951890a764d2cefbd31a72cdf4410c43a2d0","repoDigests":["docker.io/library/mysql@sha256:03b6dcedf5a2754da00e119e2cc6094ed3c884ad36b67bb25fe67be4b4f9bdb1","docker.io/library/mysql@sha256:bd873931ef20f30a5a9bf71498ce4e02c88cf48b2e8b782c337076d814deebde"],"repoTags":["docker.io/library/mysql:5.7"],"size":"588268197"},{"id":"021283c8eb95be02b23db0de7f609d603553c6714785e7a673c6594a624ffbda","repoDigests":["docker.io/library/nginx@sha256:08bc36ad52474e528cc1ea3426b5e3f4bad8a130318e3140d6cfe29c8892c7ef","docker.io/library/nginx@sha256:1bb5c4b86cb7c1e9f0209611dc2135d8a2c1c3a6436163970c99193787d067ea"],"repoTags":["docker.io/library/nginx:latest"],"size":"191044865"},{"id":"476b4809e33d05749529c2fb99759e7939f98f64eb75c3e3593dad8e5abefa89","repoDigests":["localhost/minikube-local-cache-test@sha256:06064ece3313f5511a58c91ebe93f7219255e8d19420f2b7e061a512d686db00"],"repoTags":["localhost/minikube-local-cache-test:functional-685960"],"size":"3345"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd
561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082","registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.3"],"size":"59811126"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.i
o/pause:3.3"],"size":"686139"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTa
gs":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681","repoDigests":["registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83","registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9"],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"297083935"},{"id":"08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb","registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0"],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.3"],"size":"122065872"},{"id":"7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e","registry.k8s.io/kube-controller-manage
r@sha256:d3bdc20876edfaa4894cf8464dc98592385a43cbc033b37846dccc2460c7bc06"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.3"],"size":"113919286"},{"id":"b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da","repoDigests":["docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974","docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"65249302"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c","repoDigests":["registry.k8
s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f","registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699"],"repoTags":["registry.k8s.io/kube-proxy:v1.27.3"],"size":"72713623"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-685960 image ls --format json --alsologtostderr:
I0717 18:58:53.705212 1076921 out.go:296] Setting OutFile to fd 1 ...
I0717 18:58:53.705327 1076921 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 18:58:53.705339 1076921 out.go:309] Setting ErrFile to fd 2...
I0717 18:58:53.705346 1076921 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 18:58:53.705715 1076921 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1061725/.minikube/bin
I0717 18:58:53.706472 1076921 config.go:182] Loaded profile config "functional-685960": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 18:58:53.706617 1076921 config.go:182] Loaded profile config "functional-685960": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 18:58:53.706959 1076921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 18:58:53.707005 1076921 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 18:58:53.722486 1076921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45243
I0717 18:58:53.723032 1076921 main.go:141] libmachine: () Calling .GetVersion
I0717 18:58:53.723768 1076921 main.go:141] libmachine: Using API Version  1
I0717 18:58:53.723794 1076921 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 18:58:53.724214 1076921 main.go:141] libmachine: () Calling .GetMachineName
I0717 18:58:53.724443 1076921 main.go:141] libmachine: (functional-685960) Calling .GetState
I0717 18:58:53.726636 1076921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 18:58:53.726686 1076921 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 18:58:53.741792 1076921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44143
I0717 18:58:53.742287 1076921 main.go:141] libmachine: () Calling .GetVersion
I0717 18:58:53.742845 1076921 main.go:141] libmachine: Using API Version  1
I0717 18:58:53.742874 1076921 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 18:58:53.743336 1076921 main.go:141] libmachine: () Calling .GetMachineName
I0717 18:58:53.743509 1076921 main.go:141] libmachine: (functional-685960) Calling .DriverName
I0717 18:58:53.743737 1076921 ssh_runner.go:195] Run: systemctl --version
I0717 18:58:53.743773 1076921 main.go:141] libmachine: (functional-685960) Calling .GetSSHHostname
I0717 18:58:53.747578 1076921 main.go:141] libmachine: (functional-685960) DBG | domain functional-685960 has defined MAC address 52:54:00:72:3b:56 in network mk-functional-685960
I0717 18:58:53.748049 1076921 main.go:141] libmachine: (functional-685960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:3b:56", ip: ""} in network mk-functional-685960: {Iface:virbr1 ExpiryTime:2023-07-17 19:53:05 +0000 UTC Type:0 Mac:52:54:00:72:3b:56 Iaid: IPaddr:192.168.50.154 Prefix:24 Hostname:functional-685960 Clientid:01:52:54:00:72:3b:56}
I0717 18:58:53.748084 1076921 main.go:141] libmachine: (functional-685960) DBG | domain functional-685960 has defined IP address 192.168.50.154 and MAC address 52:54:00:72:3b:56 in network mk-functional-685960
I0717 18:58:53.748243 1076921 main.go:141] libmachine: (functional-685960) Calling .GetSSHPort
I0717 18:58:53.748398 1076921 main.go:141] libmachine: (functional-685960) Calling .GetSSHKeyPath
I0717 18:58:53.748496 1076921 main.go:141] libmachine: (functional-685960) Calling .GetSSHUsername
I0717 18:58:53.748615 1076921 sshutil.go:53] new ssh client: &{IP:192.168.50.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/functional-685960/id_rsa Username:docker}
I0717 18:58:53.857131 1076921 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 18:58:53.923593 1076921 main.go:141] libmachine: Making call to close driver server
I0717 18:58:53.923606 1076921 main.go:141] libmachine: (functional-685960) Calling .Close
I0717 18:58:53.924057 1076921 main.go:141] libmachine: Successfully made call to close driver server
I0717 18:58:53.924107 1076921 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 18:58:53.924118 1076921 main.go:141] libmachine: Making call to close driver server
I0717 18:58:53.924131 1076921 main.go:141] libmachine: (functional-685960) Calling .Close
I0717 18:58:53.924387 1076921 main.go:141] libmachine: Successfully made call to close driver server
I0717 18:58:53.924408 1076921 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-685960 image ls --format yaml --alsologtostderr:
- id: b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da
repoDigests:
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
- docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "65249302"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 021283c8eb95be02b23db0de7f609d603553c6714785e7a673c6594a624ffbda
repoDigests:
- docker.io/library/nginx@sha256:08bc36ad52474e528cc1ea3426b5e3f4bad8a130318e3140d6cfe29c8892c7ef
- docker.io/library/nginx@sha256:1bb5c4b86cb7c1e9f0209611dc2135d8a2c1c3a6436163970c99193787d067ea
repoTags:
- docker.io/library/nginx:latest
size: "191044865"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb
- registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.3
size: "122065872"
- id: 7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e
- registry.k8s.io/kube-controller-manager@sha256:d3bdc20876edfaa4894cf8464dc98592385a43cbc033b37846dccc2460c7bc06
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.3
size: "113919286"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 2be84dd575ee2ecdb186dc43a9cd951890a764d2cefbd31a72cdf4410c43a2d0
repoDigests:
- docker.io/library/mysql@sha256:03b6dcedf5a2754da00e119e2cc6094ed3c884ad36b67bb25fe67be4b4f9bdb1
- docker.io/library/mysql@sha256:bd873931ef20f30a5a9bf71498ce4e02c88cf48b2e8b782c337076d814deebde
repoTags:
- docker.io/library/mysql:5.7
size: "588268197"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-685960
size: "34114467"
- id: 476b4809e33d05749529c2fb99759e7939f98f64eb75c3e3593dad8e5abefa89
repoDigests:
- localhost/minikube-local-cache-test@sha256:06064ece3313f5511a58c91ebe93f7219255e8d19420f2b7e061a512d686db00
repoTags:
- localhost/minikube-local-cache-test:functional-685960
size: "3345"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681
repoDigests:
- registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83
- registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "297083935"
- id: 5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c
repoDigests:
- registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f
- registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699
repoTags:
- registry.k8s.io/kube-proxy:v1.27.3
size: "72713623"
- id: 41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082
- registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.3
size: "59811126"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-685960 image ls --format yaml --alsologtostderr:
I0717 18:58:53.704067 1076920 out.go:296] Setting OutFile to fd 1 ...
I0717 18:58:53.704247 1076920 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 18:58:53.704259 1076920 out.go:309] Setting ErrFile to fd 2...
I0717 18:58:53.704266 1076920 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 18:58:53.704561 1076920 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1061725/.minikube/bin
I0717 18:58:53.705393 1076920 config.go:182] Loaded profile config "functional-685960": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 18:58:53.705540 1076920 config.go:182] Loaded profile config "functional-685960": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 18:58:53.706111 1076920 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 18:58:53.706184 1076920 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 18:58:53.722719 1076920 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45837
I0717 18:58:53.723408 1076920 main.go:141] libmachine: () Calling .GetVersion
I0717 18:58:53.724070 1076920 main.go:141] libmachine: Using API Version  1
I0717 18:58:53.724099 1076920 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 18:58:53.724456 1076920 main.go:141] libmachine: () Calling .GetMachineName
I0717 18:58:53.724689 1076920 main.go:141] libmachine: (functional-685960) Calling .GetState
I0717 18:58:53.726973 1076920 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 18:58:53.727035 1076920 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 18:58:53.742263 1076920 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46453
I0717 18:58:53.742658 1076920 main.go:141] libmachine: () Calling .GetVersion
I0717 18:58:53.743823 1076920 main.go:141] libmachine: Using API Version  1
I0717 18:58:53.743854 1076920 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 18:58:53.744195 1076920 main.go:141] libmachine: () Calling .GetMachineName
I0717 18:58:53.744481 1076920 main.go:141] libmachine: (functional-685960) Calling .DriverName
I0717 18:58:53.744759 1076920 ssh_runner.go:195] Run: systemctl --version
I0717 18:58:53.744787 1076920 main.go:141] libmachine: (functional-685960) Calling .GetSSHHostname
I0717 18:58:53.749037 1076920 main.go:141] libmachine: (functional-685960) DBG | domain functional-685960 has defined MAC address 52:54:00:72:3b:56 in network mk-functional-685960
I0717 18:58:53.749395 1076920 main.go:141] libmachine: (functional-685960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:3b:56", ip: ""} in network mk-functional-685960: {Iface:virbr1 ExpiryTime:2023-07-17 19:53:05 +0000 UTC Type:0 Mac:52:54:00:72:3b:56 Iaid: IPaddr:192.168.50.154 Prefix:24 Hostname:functional-685960 Clientid:01:52:54:00:72:3b:56}
I0717 18:58:53.749422 1076920 main.go:141] libmachine: (functional-685960) DBG | domain functional-685960 has defined IP address 192.168.50.154 and MAC address 52:54:00:72:3b:56 in network mk-functional-685960
I0717 18:58:53.749544 1076920 main.go:141] libmachine: (functional-685960) Calling .GetSSHPort
I0717 18:58:53.749731 1076920 main.go:141] libmachine: (functional-685960) Calling .GetSSHKeyPath
I0717 18:58:53.749883 1076920 main.go:141] libmachine: (functional-685960) Calling .GetSSHUsername
I0717 18:58:53.750021 1076920 sshutil.go:53] new ssh client: &{IP:192.168.50.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/functional-685960/id_rsa Username:docker}
I0717 18:58:53.872149 1076920 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 18:58:53.939296 1076920 main.go:141] libmachine: Making call to close driver server
I0717 18:58:53.939315 1076920 main.go:141] libmachine: (functional-685960) Calling .Close
I0717 18:58:53.939635 1076920 main.go:141] libmachine: (functional-685960) DBG | Closing plugin on server side
I0717 18:58:53.939686 1076920 main.go:141] libmachine: Successfully made call to close driver server
I0717 18:58:53.939696 1076920 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 18:58:53.939706 1076920 main.go:141] libmachine: Making call to close driver server
I0717 18:58:53.939714 1076920 main.go:141] libmachine: (functional-685960) Calling .Close
I0717 18:58:53.940026 1076920 main.go:141] libmachine: (functional-685960) DBG | Closing plugin on server side
I0717 18:58:53.940130 1076920 main.go:141] libmachine: Successfully made call to close driver server
I0717 18:58:53.940167 1076920 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (2.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-685960 ssh pgrep buildkitd: exit status 1 (211.43656ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 image build -t localhost/my-image:functional-685960 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-685960 image build -t localhost/my-image:functional-685960 testdata/build --alsologtostderr: (2.340678628s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-685960 image build -t localhost/my-image:functional-685960 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 4c09f81920d
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-685960
--> 33405418590
Successfully tagged localhost/my-image:functional-685960
334054185902f6b7f1bfb6b8447d12760a740ca0682120146f4b8de02dcbc5de
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-685960 image build -t localhost/my-image:functional-685960 testdata/build --alsologtostderr:
I0717 18:58:54.164493 1077039 out.go:296] Setting OutFile to fd 1 ...
I0717 18:58:54.164684 1077039 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 18:58:54.164696 1077039 out.go:309] Setting ErrFile to fd 2...
I0717 18:58:54.164703 1077039 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 18:58:54.164945 1077039 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1061725/.minikube/bin
I0717 18:58:54.165595 1077039 config.go:182] Loaded profile config "functional-685960": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 18:58:54.166195 1077039 config.go:182] Loaded profile config "functional-685960": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 18:58:54.166595 1077039 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 18:58:54.166639 1077039 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 18:58:54.182115 1077039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40251
I0717 18:58:54.182614 1077039 main.go:141] libmachine: () Calling .GetVersion
I0717 18:58:54.183334 1077039 main.go:141] libmachine: Using API Version  1
I0717 18:58:54.183383 1077039 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 18:58:54.183781 1077039 main.go:141] libmachine: () Calling .GetMachineName
I0717 18:58:54.184002 1077039 main.go:141] libmachine: (functional-685960) Calling .GetState
I0717 18:58:54.185998 1077039 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 18:58:54.186051 1077039 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 18:58:54.201309 1077039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42331
I0717 18:58:54.201809 1077039 main.go:141] libmachine: () Calling .GetVersion
I0717 18:58:54.202415 1077039 main.go:141] libmachine: Using API Version  1
I0717 18:58:54.202446 1077039 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 18:58:54.202843 1077039 main.go:141] libmachine: () Calling .GetMachineName
I0717 18:58:54.203150 1077039 main.go:141] libmachine: (functional-685960) Calling .DriverName
I0717 18:58:54.203390 1077039 ssh_runner.go:195] Run: systemctl --version
I0717 18:58:54.203418 1077039 main.go:141] libmachine: (functional-685960) Calling .GetSSHHostname
I0717 18:58:54.206557 1077039 main.go:141] libmachine: (functional-685960) DBG | domain functional-685960 has defined MAC address 52:54:00:72:3b:56 in network mk-functional-685960
I0717 18:58:54.207069 1077039 main.go:141] libmachine: (functional-685960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:3b:56", ip: ""} in network mk-functional-685960: {Iface:virbr1 ExpiryTime:2023-07-17 19:53:05 +0000 UTC Type:0 Mac:52:54:00:72:3b:56 Iaid: IPaddr:192.168.50.154 Prefix:24 Hostname:functional-685960 Clientid:01:52:54:00:72:3b:56}
I0717 18:58:54.207106 1077039 main.go:141] libmachine: (functional-685960) DBG | domain functional-685960 has defined IP address 192.168.50.154 and MAC address 52:54:00:72:3b:56 in network mk-functional-685960
I0717 18:58:54.207390 1077039 main.go:141] libmachine: (functional-685960) Calling .GetSSHPort
I0717 18:58:54.207618 1077039 main.go:141] libmachine: (functional-685960) Calling .GetSSHKeyPath
I0717 18:58:54.207787 1077039 main.go:141] libmachine: (functional-685960) Calling .GetSSHUsername
I0717 18:58:54.207999 1077039 sshutil.go:53] new ssh client: &{IP:192.168.50.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/functional-685960/id_rsa Username:docker}
I0717 18:58:54.298208 1077039 build_images.go:151] Building image from path: /tmp/build.3622055055.tar
I0717 18:58:54.298301 1077039 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0717 18:58:54.310257 1077039 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3622055055.tar
I0717 18:58:54.316731 1077039 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3622055055.tar: stat -c "%s %y" /var/lib/minikube/build/build.3622055055.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3622055055.tar': No such file or directory
I0717 18:58:54.316781 1077039 ssh_runner.go:362] scp /tmp/build.3622055055.tar --> /var/lib/minikube/build/build.3622055055.tar (3072 bytes)
I0717 18:58:54.342283 1077039 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3622055055
I0717 18:58:54.354011 1077039 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3622055055 -xf /var/lib/minikube/build/build.3622055055.tar
I0717 18:58:54.363262 1077039 crio.go:297] Building image: /var/lib/minikube/build/build.3622055055
I0717 18:58:54.363359 1077039 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-685960 /var/lib/minikube/build/build.3622055055 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0717 18:58:56.422406 1077039 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-685960 /var/lib/minikube/build/build.3622055055 --cgroup-manager=cgroupfs: (2.059016041s)
I0717 18:58:56.422483 1077039 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3622055055
I0717 18:58:56.433736 1077039 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3622055055.tar
I0717 18:58:56.449172 1077039 build_images.go:207] Built localhost/my-image:functional-685960 from /tmp/build.3622055055.tar
I0717 18:58:56.449211 1077039 build_images.go:123] succeeded building to: functional-685960
I0717 18:58:56.449216 1077039 build_images.go:124] failed building to: 
I0717 18:58:56.449277 1077039 main.go:141] libmachine: Making call to close driver server
I0717 18:58:56.449291 1077039 main.go:141] libmachine: (functional-685960) Calling .Close
I0717 18:58:56.449641 1077039 main.go:141] libmachine: Successfully made call to close driver server
I0717 18:58:56.449662 1077039 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 18:58:56.449672 1077039 main.go:141] libmachine: Making call to close driver server
I0717 18:58:56.449681 1077039 main.go:141] libmachine: (functional-685960) Calling .Close
I0717 18:58:56.449682 1077039 main.go:141] libmachine: (functional-685960) DBG | Closing plugin on server side
I0717 18:58:56.449944 1077039 main.go:141] libmachine: Successfully made call to close driver server
I0717 18:58:56.449963 1077039 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.78s)
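The stderr above spells out how image build works on a crio cluster when buildkitd is not running (the pgrep probe exits 1): the client tars the build context, copies the archive into /var/lib/minikube/build on the node, unpacks it, and drives podman build over SSH. Roughly, the on-node steps recorded in the log (the build.3622055055 name is specific to this run):

    sudo mkdir -p /var/lib/minikube/build/build.3622055055
    sudo tar -C /var/lib/minikube/build/build.3622055055 -xf /var/lib/minikube/build/build.3622055055.tar
    sudo podman build -t localhost/my-image:functional-685960 /var/lib/minikube/build/build.3622055055 --cgroup-manager=cgroupfs

From the client side the whole flow is the single command shown above: out/minikube-linux-amd64 -p functional-685960 image build -t localhost/my-image:functional-685960 testdata/build.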

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.085086715s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-685960
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.11s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 update-context --alsologtostderr -v=2
2023/07/17 18:58:51 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)
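The three UpdateContextCmd subtests only assert that the command exits cleanly in different kubeconfig states; update-context rewrites the functional-685960 entry in the active kubeconfig so it points at the cluster's current API server address. A quick manual check of the result (the kubectl step is an assumption, not part of the test):

    out/minikube-linux-amd64 -p functional-685960 update-context --alsologtostderr -v=2
    kubectl config view --minify --flatten | grep server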

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (7.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 image load --daemon gcr.io/google-containers/addon-resizer:functional-685960 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-685960 image load --daemon gcr.io/google-containers/addon-resizer:functional-685960 --alsologtostderr: (6.634143107s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (7.17s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 image load --daemon gcr.io/google-containers/addon-resizer:functional-685960 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-685960 image load --daemon gcr.io/google-containers/addon-resizer:functional-685960 --alsologtostderr: (5.6597999s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.93s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (10.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-685960
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 image load --daemon gcr.io/google-containers/addon-resizer:functional-685960 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-685960 image load --daemon gcr.io/google-containers/addon-resizer:functional-685960 --alsologtostderr: (9.116067045s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (10.39s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (5.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 image save gcr.io/google-containers/addon-resizer:functional-685960 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
E0717 18:58:43.981029 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt: no such file or directory
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-685960 image save gcr.io/google-containers/addon-resizer:functional-685960 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (5.451391278s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (5.45s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 image rm gcr.io/google-containers/addon-resizer:functional-685960 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-685960 image ls: (1.222352475s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.69s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-685960 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (2.046184974s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.84s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-685960
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-685960 image save --daemon gcr.io/google-containers/addon-resizer:functional-685960 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-685960 image save --daemon gcr.io/google-containers/addon-resizer:functional-685960 --alsologtostderr: (2.796035685s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-685960
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.84s)
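Taken together, the ImageCommands subtests above exercise a full image round-trip through the cluster's crio runtime: tag on the host, load into the runtime, save to a tarball, remove, load back from the file, and finally save back to the host Docker daemon. As a rough sketch (not part of the test suite itself), the same flow can be reproduced by hand with the commands the tests invoke; the profile name functional-685960 comes from this run, and the tarball path below is a stand-in for the workspace path the test used:

    # Tag a local image for the profile, then push it into the cluster's runtime.
    docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-685960
    out/minikube-linux-amd64 -p functional-685960 image load --daemon gcr.io/google-containers/addon-resizer:functional-685960

    # Export the in-cluster image to a tarball, remove it, and load it back from the file.
    out/minikube-linux-amd64 -p functional-685960 image save gcr.io/google-containers/addon-resizer:functional-685960 /tmp/addon-resizer-save.tar
    out/minikube-linux-amd64 -p functional-685960 image rm gcr.io/google-containers/addon-resizer:functional-685960
    out/minikube-linux-amd64 -p functional-685960 image load /tmp/addon-resizer-save.tar

    # Push the image back to the host's Docker daemon and confirm it is there.
    out/minikube-linux-amd64 -p functional-685960 image save --daemon gcr.io/google-containers/addon-resizer:functional-685960
    docker image inspect gcr.io/google-containers/addon-resizer:functional-685960

    # image ls is run between steps to confirm presence or absence in the runtime.
    out/minikube-linux-amd64 -p functional-685960 image ls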

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.08s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-685960
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-685960
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-685960
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (111.05s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-946642 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-946642 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m51.046454293s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (111.05s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.7s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-946642 addons enable ingress --alsologtostderr -v=5
E0717 19:01:00.134385 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-946642 addons enable ingress --alsologtostderr -v=5: (13.703760308s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.70s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.7s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-946642 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.70s)

                                                
                                    
TestJSONOutput/start/Command (100.62s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-282035 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0717 19:04:23.255268 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/functional-685960/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-282035 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m40.621606108s)
--- PASS: TestJSONOutput/start/Command (100.62s)

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.67s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-282035 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.69s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-282035 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.69s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (17.12s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-282035 --output=json --user=testUser
E0717 19:05:45.176923 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/functional-685960/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-282035 --output=json --user=testUser: (17.121096527s)
--- PASS: TestJSONOutput/stop/Command (17.12s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-664697 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-664697 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (75.684734ms)
-- stdout --
	{"specversion":"1.0","id":"542a1dea-9fe3-43ef-9fba-4de5bc4f8b03","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-664697] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b9446214-6911-4b4f-a12f-da63276d4a72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16890"}}
	{"specversion":"1.0","id":"d0b46cc4-c4bd-4c62-a86b-368f201f7cda","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a1d91df2-ab25-4627-801d-ac3a3c194b9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16890-1061725/kubeconfig"}}
	{"specversion":"1.0","id":"c2c07503-231c-4963-9df2-f6fd0ab8c52d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1061725/.minikube"}}
	{"specversion":"1.0","id":"bace1087-005f-4805-9b1a-729473152abb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"cd096272-ddd7-44ee-a03f-d30f6c12ddda","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b54703fb-9854-4163-a0f6-7a486ce732e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-664697" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-664697
--- PASS: TestErrorJSONOutput (0.22s)
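The stdout captured above is a stream of CloudEvents-style JSON objects, one per line, which is what --output=json emits; the final io.k8s.sigs.minikube.error event carries the failure name, message, and exit code. A minimal sketch of consuming that stream, assuming jq is available on the host (jq is not used by the test suite itself):

    # Run minikube with JSON output and keep the event stream; the command is expected to fail here.
    out/minikube-linux-amd64 start -p json-output-error-664697 --memory=2200 --output=json --wait=true --driver=fail > events.json || true

    # Pull the error event(s) out of the stream: name, message, and exit code.
    jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message) (exit \(.data.exitcode))"' events.json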

                                                
                                    
TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (101.56s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-934038 --driver=kvm2  --container-runtime=crio
E0717 19:06:00.134188 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt: no such file or directory
E0717 19:06:03.521267 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt: no such file or directory
E0717 19:06:03.526669 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt: no such file or directory
E0717 19:06:03.537037 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt: no such file or directory
E0717 19:06:03.557429 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt: no such file or directory
E0717 19:06:03.597827 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt: no such file or directory
E0717 19:06:03.678269 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt: no such file or directory
E0717 19:06:03.838817 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt: no such file or directory
E0717 19:06:04.159251 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt: no such file or directory
E0717 19:06:04.800302 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt: no such file or directory
E0717 19:06:06.080981 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt: no such file or directory
E0717 19:06:08.642842 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt: no such file or directory
E0717 19:06:13.764100 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt: no such file or directory
E0717 19:06:24.004357 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-934038 --driver=kvm2  --container-runtime=crio: (49.012304548s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-937271 --driver=kvm2  --container-runtime=crio
E0717 19:06:44.484926 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt: no such file or directory
E0717 19:07:25.446698 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-937271 --driver=kvm2  --container-runtime=crio: (49.629477584s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-934038
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-937271
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-937271" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-937271
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-937271: (1.018676164s)
helpers_test.go:175: Cleaning up "first-934038" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-934038
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-934038: (1.017501475s)
--- PASS: TestMinikubeProfile (101.56s)
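TestMinikubeProfile drives two independent clusters side by side and switches the active profile between them. A condensed sketch of that flow, using the profile names from this run (any two unique names would do):

    # Create two separate clusters, each under its own profile.
    out/minikube-linux-amd64 start -p first-934038 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 start -p second-937271 --driver=kvm2 --container-runtime=crio

    # Switch the active profile and inspect all profiles as JSON.
    out/minikube-linux-amd64 profile first-934038
    out/minikube-linux-amd64 profile list -ojson

    # Clean up both profiles when done.
    out/minikube-linux-amd64 delete -p second-937271
    out/minikube-linux-amd64 delete -p first-934038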

                                                
                                    
TestMountStart/serial/StartWithMountFirst (29.96s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-773422 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0717 19:08:01.331254 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/functional-685960/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-773422 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.956817109s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.96s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.41s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-773422 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-773422 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.41s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (30.83s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-795139 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0717 19:08:29.017259 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/functional-685960/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-795139 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.826314713s)
--- PASS: TestMountStart/serial/StartWithMountSecond (30.83s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.4s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-795139 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-795139 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.9s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-773422 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.90s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.42s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-795139 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-795139 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.42s)

                                                
                                    
TestMountStart/serial/Stop (1.23s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-795139
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-795139: (1.231009408s)
--- PASS: TestMountStart/serial/Stop (1.23s)

                                                
                                    
TestMountStart/serial/RestartStopped (22.68s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-795139
E0717 19:08:47.367958 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-795139: (21.680041719s)
--- PASS: TestMountStart/serial/RestartStopped (22.68s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.4s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-795139 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-795139 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)
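The MountStart group checks that a 9p host mount survives the deletion of its sibling profile and a stop/restart of its own machine. The verification step is the same throughout: list the mount point over SSH and confirm a 9p entry in the guest's mount table. A sketch of the core loop, with the flags for the second profile taken from this run:

    # Start a machine (no Kubernetes) with a 9p mount of the host directory.
    out/minikube-linux-amd64 start -p mount-start-2-795139 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 --container-runtime=crio

    # Verify the mount from inside the guest.
    out/minikube-linux-amd64 -p mount-start-2-795139 ssh -- ls /minikube-host
    out/minikube-linux-amd64 -p mount-start-2-795139 ssh -- mount | grep 9p

    # Stop, start again, and re-run the same verification.
    out/minikube-linux-amd64 stop -p mount-start-2-795139
    out/minikube-linux-amd64 start -p mount-start-2-795139
    out/minikube-linux-amd64 -p mount-start-2-795139 ssh -- mount | grep 9p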

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (117.15s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-464644 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0717 19:11:00.134133 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-464644 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m56.70364965s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (117.15s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.03s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-464644 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-464644 -- rollout status deployment/busybox
E0717 19:11:03.520111 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt: no such file or directory
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-464644 -- rollout status deployment/busybox: (3.122996429s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-464644 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-464644 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-464644 -- exec busybox-67b7f59bb-bjpl2 -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-464644 -- exec busybox-67b7f59bb-jgj4t -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-464644 -- exec busybox-67b7f59bb-bjpl2 -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-464644 -- exec busybox-67b7f59bb-jgj4t -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-464644 -- exec busybox-67b7f59bb-bjpl2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-464644 -- exec busybox-67b7f59bb-jgj4t -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.03s)
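DeployApp2Nodes schedules a two-replica busybox deployment across both nodes and checks in-cluster DNS from each pod. A compressed sketch of the same check (the per-pod exec commands above are unrolled here into a loop; pod names differ on every fresh deployment):

    # Deploy the test workload and wait for both replicas.
    out/minikube-linux-amd64 kubectl -p multinode-464644 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
    out/minikube-linux-amd64 kubectl -p multinode-464644 -- rollout status deployment/busybox

    # Resolve an external name and the in-cluster service name from each pod.
    for pod in $(out/minikube-linux-amd64 kubectl -p multinode-464644 -- get pods -o jsonpath='{.items[*].metadata.name}'); do
      out/minikube-linux-amd64 kubectl -p multinode-464644 -- exec "$pod" -- nslookup kubernetes.io
      out/minikube-linux-amd64 kubectl -p multinode-464644 -- exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done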

                                                
                                    
TestMultiNode/serial/AddNode (41.15s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-464644 -v 3 --alsologtostderr
E0717 19:11:31.208189 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-464644 -v 3 --alsologtostderr: (40.514640498s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.15s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.23s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.74s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 cp testdata/cp-test.txt multinode-464644:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 ssh -n multinode-464644 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 cp multinode-464644:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile291099792/001/cp-test_multinode-464644.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 ssh -n multinode-464644 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 cp multinode-464644:/home/docker/cp-test.txt multinode-464644-m02:/home/docker/cp-test_multinode-464644_multinode-464644-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 ssh -n multinode-464644 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 ssh -n multinode-464644-m02 "sudo cat /home/docker/cp-test_multinode-464644_multinode-464644-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 cp multinode-464644:/home/docker/cp-test.txt multinode-464644-m03:/home/docker/cp-test_multinode-464644_multinode-464644-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 ssh -n multinode-464644 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 ssh -n multinode-464644-m03 "sudo cat /home/docker/cp-test_multinode-464644_multinode-464644-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 cp testdata/cp-test.txt multinode-464644-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 ssh -n multinode-464644-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 cp multinode-464644-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile291099792/001/cp-test_multinode-464644-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 ssh -n multinode-464644-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 cp multinode-464644-m02:/home/docker/cp-test.txt multinode-464644:/home/docker/cp-test_multinode-464644-m02_multinode-464644.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 ssh -n multinode-464644-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 ssh -n multinode-464644 "sudo cat /home/docker/cp-test_multinode-464644-m02_multinode-464644.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 cp multinode-464644-m02:/home/docker/cp-test.txt multinode-464644-m03:/home/docker/cp-test_multinode-464644-m02_multinode-464644-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 ssh -n multinode-464644-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 ssh -n multinode-464644-m03 "sudo cat /home/docker/cp-test_multinode-464644-m02_multinode-464644-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 cp testdata/cp-test.txt multinode-464644-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 ssh -n multinode-464644-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 cp multinode-464644-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile291099792/001/cp-test_multinode-464644-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 ssh -n multinode-464644-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 cp multinode-464644-m03:/home/docker/cp-test.txt multinode-464644:/home/docker/cp-test_multinode-464644-m03_multinode-464644.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 ssh -n multinode-464644-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 ssh -n multinode-464644 "sudo cat /home/docker/cp-test_multinode-464644-m03_multinode-464644.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 cp multinode-464644-m03:/home/docker/cp-test.txt multinode-464644-m02:/home/docker/cp-test_multinode-464644-m03_multinode-464644-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 ssh -n multinode-464644-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 ssh -n multinode-464644-m02 "sudo cat /home/docker/cp-test_multinode-464644-m03_multinode-464644-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.74s)
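CopyFile pushes a test file to every node and copies it between nodes, verifying each hop by reading the file back over SSH. The essential pattern, abbreviated here to one hop each way with the names from this run:

    # Host -> node, then read it back on that node.
    out/minikube-linux-amd64 -p multinode-464644 cp testdata/cp-test.txt multinode-464644:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p multinode-464644 ssh -n multinode-464644 "sudo cat /home/docker/cp-test.txt"

    # Node -> node, then read it back on the destination node.
    out/minikube-linux-amd64 -p multinode-464644 cp multinode-464644:/home/docker/cp-test.txt multinode-464644-m02:/home/docker/cp-test_multinode-464644_multinode-464644-m02.txt
    out/minikube-linux-amd64 -p multinode-464644 ssh -n multinode-464644-m02 "sudo cat /home/docker/cp-test_multinode-464644_multinode-464644-m02.txt"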

                                                
                                    
TestMultiNode/serial/StopNode (3s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-464644 node stop m03: (2.089409502s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-464644 status: exit status 7 (452.979897ms)
-- stdout --
	multinode-464644
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-464644-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-464644-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-464644 status --alsologtostderr: exit status 7 (458.168623ms)
-- stdout --
	multinode-464644
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-464644-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-464644-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0717 19:12:01.416325 1083988 out.go:296] Setting OutFile to fd 1 ...
	I0717 19:12:01.416450 1083988 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:12:01.416455 1083988 out.go:309] Setting ErrFile to fd 2...
	I0717 19:12:01.416459 1083988 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:12:01.416678 1083988 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1061725/.minikube/bin
	I0717 19:12:01.416867 1083988 out.go:303] Setting JSON to false
	I0717 19:12:01.416902 1083988 mustload.go:65] Loading cluster: multinode-464644
	I0717 19:12:01.416954 1083988 notify.go:220] Checking for updates...
	I0717 19:12:01.417312 1083988 config.go:182] Loaded profile config "multinode-464644": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:12:01.417330 1083988 status.go:255] checking status of multinode-464644 ...
	I0717 19:12:01.417754 1083988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:12:01.417837 1083988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:12:01.437595 1083988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44305
	I0717 19:12:01.438076 1083988 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:12:01.438816 1083988 main.go:141] libmachine: Using API Version  1
	I0717 19:12:01.438840 1083988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:12:01.439328 1083988 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:12:01.439632 1083988 main.go:141] libmachine: (multinode-464644) Calling .GetState
	I0717 19:12:01.441509 1083988 status.go:330] multinode-464644 host status = "Running" (err=<nil>)
	I0717 19:12:01.441546 1083988 host.go:66] Checking if "multinode-464644" exists ...
	I0717 19:12:01.441890 1083988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:12:01.441941 1083988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:12:01.457927 1083988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45323
	I0717 19:12:01.458436 1083988 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:12:01.459008 1083988 main.go:141] libmachine: Using API Version  1
	I0717 19:12:01.459032 1083988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:12:01.459471 1083988 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:12:01.459715 1083988 main.go:141] libmachine: (multinode-464644) Calling .GetIP
	I0717 19:12:01.462897 1083988 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:12:01.463404 1083988 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:09:20 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:12:01.463430 1083988 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:12:01.463625 1083988 host.go:66] Checking if "multinode-464644" exists ...
	I0717 19:12:01.463938 1083988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:12:01.463988 1083988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:12:01.480596 1083988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38131
	I0717 19:12:01.481154 1083988 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:12:01.481785 1083988 main.go:141] libmachine: Using API Version  1
	I0717 19:12:01.481812 1083988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:12:01.482173 1083988 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:12:01.482394 1083988 main.go:141] libmachine: (multinode-464644) Calling .DriverName
	I0717 19:12:01.482651 1083988 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 19:12:01.482693 1083988 main.go:141] libmachine: (multinode-464644) Calling .GetSSHHostname
	I0717 19:12:01.486447 1083988 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:12:01.487068 1083988 main.go:141] libmachine: (multinode-464644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:06:f6", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:09:20 +0000 UTC Type:0 Mac:52:54:00:7b:06:f6 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-464644 Clientid:01:52:54:00:7b:06:f6}
	I0717 19:12:01.487097 1083988 main.go:141] libmachine: (multinode-464644) DBG | domain multinode-464644 has defined IP address 192.168.39.174 and MAC address 52:54:00:7b:06:f6 in network mk-multinode-464644
	I0717 19:12:01.487298 1083988 main.go:141] libmachine: (multinode-464644) Calling .GetSSHPort
	I0717 19:12:01.487513 1083988 main.go:141] libmachine: (multinode-464644) Calling .GetSSHKeyPath
	I0717 19:12:01.487672 1083988 main.go:141] libmachine: (multinode-464644) Calling .GetSSHUsername
	I0717 19:12:01.487813 1083988 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644/id_rsa Username:docker}
	I0717 19:12:01.583148 1083988 ssh_runner.go:195] Run: systemctl --version
	I0717 19:12:01.588818 1083988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:12:01.602925 1083988 kubeconfig.go:92] found "multinode-464644" server: "https://192.168.39.174:8443"
	I0717 19:12:01.602959 1083988 api_server.go:166] Checking apiserver status ...
	I0717 19:12:01.602995 1083988 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:12:01.616231 1083988 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1090/cgroup
	I0717 19:12:01.626240 1083988 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podb280034e13df00701aec7afc575fcc6c/crio-5ff68c0a594cf76b1e9ad2ecf972dfab0dd4b2c215658b9176f7fc1b416b4ece"
	I0717 19:12:01.626319 1083988 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podb280034e13df00701aec7afc575fcc6c/crio-5ff68c0a594cf76b1e9ad2ecf972dfab0dd4b2c215658b9176f7fc1b416b4ece/freezer.state
	I0717 19:12:01.636716 1083988 api_server.go:204] freezer state: "THAWED"
	I0717 19:12:01.636752 1083988 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8443/healthz ...
	I0717 19:12:01.642269 1083988 api_server.go:279] https://192.168.39.174:8443/healthz returned 200:
	ok
	I0717 19:12:01.642305 1083988 status.go:421] multinode-464644 apiserver status = Running (err=<nil>)
	I0717 19:12:01.642316 1083988 status.go:257] multinode-464644 status: &{Name:multinode-464644 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 19:12:01.642333 1083988 status.go:255] checking status of multinode-464644-m02 ...
	I0717 19:12:01.642685 1083988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:12:01.642719 1083988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:12:01.658768 1083988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35497
	I0717 19:12:01.659266 1083988 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:12:01.659781 1083988 main.go:141] libmachine: Using API Version  1
	I0717 19:12:01.659807 1083988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:12:01.660201 1083988 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:12:01.660416 1083988 main.go:141] libmachine: (multinode-464644-m02) Calling .GetState
	I0717 19:12:01.662510 1083988 status.go:330] multinode-464644-m02 host status = "Running" (err=<nil>)
	I0717 19:12:01.662537 1083988 host.go:66] Checking if "multinode-464644-m02" exists ...
	I0717 19:12:01.662887 1083988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:12:01.662921 1083988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:12:01.679867 1083988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44873
	I0717 19:12:01.680465 1083988 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:12:01.681086 1083988 main.go:141] libmachine: Using API Version  1
	I0717 19:12:01.681117 1083988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:12:01.681461 1083988 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:12:01.681674 1083988 main.go:141] libmachine: (multinode-464644-m02) Calling .GetIP
	I0717 19:12:01.684597 1083988 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:12:01.685094 1083988 main.go:141] libmachine: (multinode-464644-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:46:84", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:10:34 +0000 UTC Type:0 Mac:52:54:00:2d:46:84 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:multinode-464644-m02 Clientid:01:52:54:00:2d:46:84}
	I0717 19:12:01.685132 1083988 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined IP address 192.168.39.49 and MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:12:01.685264 1083988 host.go:66] Checking if "multinode-464644-m02" exists ...
	I0717 19:12:01.685597 1083988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:12:01.685642 1083988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:12:01.701356 1083988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40403
	I0717 19:12:01.701884 1083988 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:12:01.702452 1083988 main.go:141] libmachine: Using API Version  1
	I0717 19:12:01.702486 1083988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:12:01.702836 1083988 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:12:01.703059 1083988 main.go:141] libmachine: (multinode-464644-m02) Calling .DriverName
	I0717 19:12:01.703253 1083988 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 19:12:01.703278 1083988 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHHostname
	I0717 19:12:01.706183 1083988 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:12:01.706763 1083988 main.go:141] libmachine: (multinode-464644-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:46:84", ip: ""} in network mk-multinode-464644: {Iface:virbr1 ExpiryTime:2023-07-17 20:10:34 +0000 UTC Type:0 Mac:52:54:00:2d:46:84 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:multinode-464644-m02 Clientid:01:52:54:00:2d:46:84}
	I0717 19:12:01.706797 1083988 main.go:141] libmachine: (multinode-464644-m02) DBG | domain multinode-464644-m02 has defined IP address 192.168.39.49 and MAC address 52:54:00:2d:46:84 in network mk-multinode-464644
	I0717 19:12:01.707005 1083988 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHPort
	I0717 19:12:01.707183 1083988 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHKeyPath
	I0717 19:12:01.707354 1083988 main.go:141] libmachine: (multinode-464644-m02) Calling .GetSSHUsername
	I0717 19:12:01.707485 1083988 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16890-1061725/.minikube/machines/multinode-464644-m02/id_rsa Username:docker}
	I0717 19:12:01.793048 1083988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:12:01.807496 1083988 status.go:257] multinode-464644-m02 status: &{Name:multinode-464644-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0717 19:12:01.807557 1083988 status.go:255] checking status of multinode-464644-m03 ...
	I0717 19:12:01.807983 1083988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:12:01.808025 1083988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:12:01.824313 1083988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34529
	I0717 19:12:01.824764 1083988 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:12:01.825360 1083988 main.go:141] libmachine: Using API Version  1
	I0717 19:12:01.825384 1083988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:12:01.825819 1083988 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:12:01.826089 1083988 main.go:141] libmachine: (multinode-464644-m03) Calling .GetState
	I0717 19:12:01.827773 1083988 status.go:330] multinode-464644-m03 host status = "Stopped" (err=<nil>)
	I0717 19:12:01.827789 1083988 status.go:343] host is not running, skipping remaining checks
	I0717 19:12:01.827795 1083988 status.go:257] multinode-464644-m03 status: &{Name:multinode-464644-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.00s)
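With m03 stopped, the status output above still reports the remaining nodes as Running but the command exits non-zero (exit status 7 in this run), so callers have to treat a non-zero exit as "at least one node is down" rather than a hard failure. A small sketch of scripting around that, assuming a POSIX-ish shell:

    # Stop one worker, then query status; branch on the exit code instead of letting set -e abort.
    out/minikube-linux-amd64 -p multinode-464644 node stop m03
    if out/minikube-linux-amd64 -p multinode-464644 status; then
      echo "all nodes running"
    else
      echo "status exited with $? - at least one node is not running"
    fi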

                                                
                                    
TestMultiNode/serial/StartAfterStop (33.47s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 node start m03 --alsologtostderr
E0717 19:12:23.181923 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt: no such file or directory
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-464644 node start m03 --alsologtostderr: (32.778317951s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (33.47s)

                                                
                                    
TestMultiNode/serial/DeleteNode (1.87s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-464644 node delete m03: (1.301558742s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.87s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (447.31s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-464644 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0717 19:28:01.330330 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/functional-685960/client.crt: no such file or directory
E0717 19:29:03.183306 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt: no such file or directory
E0717 19:31:00.133836 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt: no such file or directory
E0717 19:31:03.520759 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt: no such file or directory
E0717 19:33:01.330338 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/functional-685960/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-464644 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m26.716912757s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-464644 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (447.31s)
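
The readiness check above relies on a go-template that prints one Ready-condition status per node; stripped of the test harness quoting, it is roughly:

    # Prints "True" (or "False") once per node, based on each node's Ready condition.
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'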

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (50.71s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-464644
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-464644-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-464644-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (75.412718ms)

                                                
                                                
-- stdout --
	* [multinode-464644-m02] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16890
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16890-1061725/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1061725/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-464644-m02' is duplicated with machine name 'multinode-464644-m02' in profile 'multinode-464644'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-464644-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-464644-m03 --driver=kvm2  --container-runtime=crio: (49.297926339s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-464644
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-464644: exit status 80 (249.319926ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-464644
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-464644-m03 already exists in multinode-464644-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-464644-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-464644-m03: (1.044107743s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (50.71s)
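
The two rejections above come from the naming scheme for multi-node profiles: a profile reserves machine names of the form <profile>-m02, <profile>-m03, and so on. A rough by-hand replay (names taken from the log):

    # Profiles that collide with an existing machine name are rejected up front (exit 14, MK_USAGE).
    minikube start -p multinode-464644-m02 --driver=kvm2 --container-runtime=crio
    # A profile named after the next free node slot does start ...
    minikube start -p multinode-464644-m03 --driver=kvm2 --container-runtime=crio
    # ... but it then blocks adding a node to the original cluster (exit 80, GUEST_NODE_ADD), so clean it up.
    minikube node add -p multinode-464644
    minikube delete -p multinode-464644-m03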

                                                
                                    
TestScheduledStopUnix (120.76s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-119343 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-119343 --memory=2048 --driver=kvm2  --container-runtime=crio: (49.072702697s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-119343 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-119343 -n scheduled-stop-119343
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-119343 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-119343 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-119343 -n scheduled-stop-119343
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-119343
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-119343 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0717 19:41:00.134363 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt: no such file or directory
E0717 19:41:03.521359 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-119343
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-119343: exit status 7 (64.604407ms)

                                                
                                                
-- stdout --
	scheduled-stop-119343
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-119343 -n scheduled-stop-119343
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-119343 -n scheduled-stop-119343: exit status 7 (62.447812ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-119343" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-119343
--- PASS: TestScheduledStopUnix (120.76s)
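
Condensed from the run above, the scheduled-stop workflow is (profile name from the log; a sketch, not the test's exact timing checks):

    # Arm a stop 5 minutes out and confirm the timer via the TimeToStop status field.
    minikube stop -p scheduled-stop-119343 --schedule 5m
    minikube status -p scheduled-stop-119343 --format='{{.TimeToStop}}'
    # A new --schedule replaces the previous timer; --cancel-scheduled disarms it entirely.
    minikube stop -p scheduled-stop-119343 --schedule 15s
    minikube stop -p scheduled-stop-119343 --cancel-scheduled
    # After a scheduled stop actually fires, status exits 7 and reports the host as Stopped.
    minikube status -p scheduled-stop-119343 --format='{{.Host}}'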

                                                
                                    
TestKubernetesUpgrade (234.19s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-852374 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-852374 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (2m5.912806932s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-852374
version_upgrade_test.go:239: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-852374: (2.219641004s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-852374 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-852374 status --format={{.Host}}: exit status 7 (82.617722ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-852374 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:255: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-852374 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (49.296518874s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-852374 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-852374 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-852374 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio: exit status 106 (295.817996ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-852374] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16890
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16890-1061725/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1061725/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.27.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-852374
	    minikube start -p kubernetes-upgrade-852374 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8523742 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.27.3, by running:
	    
	    minikube start -p kubernetes-upgrade-852374 --kubernetes-version=v1.27.3
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-852374 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:287: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-852374 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (54.990229087s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-852374" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-852374
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-852374: (1.270851466s)
--- PASS: TestKubernetesUpgrade (234.19s)
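
The upgrade path validated above, reduced to its minikube invocations (versions and profile name from the log):

    # Upgrade: start at the old version, stop, then restart the same profile at the new version.
    minikube start -p kubernetes-upgrade-852374 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio
    minikube stop -p kubernetes-upgrade-852374
    minikube start -p kubernetes-upgrade-852374 --memory=2200 --kubernetes-version=v1.27.3 --driver=kvm2 --container-runtime=crio
    # Downgrading in place is refused (exit 106, K8S_DOWNGRADE_UNSUPPORTED); recreating is the supported route.
    minikube delete -p kubernetes-upgrade-852374
    minikube start -p kubernetes-upgrade-852374 --kubernetes-version=v1.16.0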

                                                
                                    
TestPause/serial/Start (94.88s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-882959 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-882959 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m34.883674999s)
--- PASS: TestPause/serial/Start (94.88s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.45s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.45s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-473350 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-473350 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (93.31939ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-473350] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16890
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16890-1061725/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1061725/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
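
As the error text above notes, --no-kubernetes and --kubernetes-version are mutually exclusive; the fix it suggests looks like this in practice:

    # Rejected with exit 14 (MK_USAGE): a version pin contradicts running without Kubernetes.
    minikube start -p NoKubernetes-473350 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio
    # Drop any globally configured version, then start the profile without Kubernetes.
    minikube config unset kubernetes-version
    minikube start -p NoKubernetes-473350 --no-kubernetes --driver=kvm2 --container-runtime=crio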

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (53.68s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-473350 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-473350 --driver=kvm2  --container-runtime=crio: (53.394528543s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-473350 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (53.68s)

                                                
                                    
TestNetworkPlugins/group/false (3.31s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-395471 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-395471 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (118.416985ms)

                                                
                                                
-- stdout --
	* [false-395471] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16890
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16890-1061725/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1061725/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 19:45:23.345141 1094937 out.go:296] Setting OutFile to fd 1 ...
	I0717 19:45:23.345280 1094937 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:45:23.345289 1094937 out.go:309] Setting ErrFile to fd 2...
	I0717 19:45:23.345293 1094937 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:45:23.345494 1094937 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1061725/.minikube/bin
	I0717 19:45:23.346173 1094937 out.go:303] Setting JSON to false
	I0717 19:45:23.347184 1094937 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":16074,"bootTime":1689607049,"procs":231,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 19:45:23.347279 1094937 start.go:138] virtualization: kvm guest
	I0717 19:45:23.350552 1094937 out.go:177] * [false-395471] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 19:45:23.352721 1094937 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 19:45:23.352752 1094937 notify.go:220] Checking for updates...
	I0717 19:45:23.354846 1094937 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:45:23.358441 1094937 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-1061725/kubeconfig
	I0717 19:45:23.360522 1094937 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1061725/.minikube
	I0717 19:45:23.362407 1094937 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 19:45:23.364209 1094937 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 19:45:23.366370 1094937 config.go:182] Loaded profile config "NoKubernetes-473350": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:45:23.366465 1094937 config.go:182] Loaded profile config "running-upgrade-585114": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0717 19:45:23.366552 1094937 config.go:182] Loaded profile config "stopped-upgrade-983290": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0717 19:45:23.366647 1094937 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 19:45:23.408309 1094937 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 19:45:23.410249 1094937 start.go:298] selected driver: kvm2
	I0717 19:45:23.410272 1094937 start.go:880] validating driver "kvm2" against <nil>
	I0717 19:45:23.410285 1094937 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 19:45:23.412812 1094937 out.go:177] 
	W0717 19:45:23.414693 1094937 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0717 19:45:23.416586 1094937 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-395471 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-395471

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-395471

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-395471

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-395471

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-395471

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-395471

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-395471

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-395471

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-395471

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-395471

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-395471"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-395471"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-395471"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-395471

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-395471"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-395471"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-395471" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-395471" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-395471" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-395471" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-395471" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-395471" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-395471" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-395471" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-395471"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-395471"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-395471"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-395471"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-395471"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-395471" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-395471" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-395471" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-395471"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-395471"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-395471"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-395471"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-395471"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-395471

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-395471"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-395471"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-395471"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-395471"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-395471"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-395471"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-395471"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-395471"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-395471"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-395471"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-395471"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-395471"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-395471"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-395471"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-395471"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-395471"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-395471"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-395471"

                                                
                                                
----------------------- debugLogs end: false-395471 [took: 3.045054406s] --------------------------------
helpers_test.go:175: Cleaning up "false-395471" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-395471
--- PASS: TestNetworkPlugins/group/false (3.31s)
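
The quick rejection above is expected: the crio runtime needs a CNI, so --cni=false is invalid with it. A sketch of the failing call plus one plausible alternative (bridge is an illustrative choice, not what this suite uses):

    # Exits 14 (MK_USAGE): the "crio" container runtime requires CNI.
    minikube start -p false-395471 --memory=2048 --cni=false --driver=kvm2 --container-runtime=crio
    # Picking any concrete CNI (for example the built-in bridge plugin) avoids the error.
    minikube start -p false-395471 --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=crio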

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (63.02s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-473350 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0717 19:46:00.133794 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt: no such file or directory
E0717 19:46:03.520614 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-473350 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m1.699627369s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-473350 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-473350 status -o json: exit status 2 (269.958057ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-473350","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-473350
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-473350: (1.05302449s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (63.02s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.46s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-983290
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.46s)

                                                
                                    
TestNoKubernetes/serial/Start (74.24s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-473350 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-473350 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m14.240223084s)
--- PASS: TestNoKubernetes/serial/Start (74.24s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (149.79s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-149000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
E0717 19:48:01.330963 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/functional-685960/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-149000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (2m29.785731344s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (149.79s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (149.93s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-408472 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-408472 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3: (2m29.932086551s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (149.93s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-473350 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-473350 "sudo systemctl is-active --quiet service kubelet": exit status 1 (219.939847ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)
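
The check above simply asks systemd, over minikube ssh, whether the kubelet unit is active; the non-zero exit is what proves Kubernetes is not running:

    # is-active exits non-zero for an inactive unit, and minikube ssh propagates that failure.
    minikube ssh -p NoKubernetes-473350 "sudo systemctl is-active --quiet service kubelet"
    echo $?    # non-zero here, because the kubelet was never started in --no-kubernetes mode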

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.76s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.76s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.31s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-473350
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-473350: (1.306106944s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (73.63s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-473350 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-473350 --driver=kvm2  --container-runtime=crio: (1m13.631482182s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (73.63s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-473350 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-473350 "sudo systemctl is-active --quiet service kubelet": exit status 1 (210.583881ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (106.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-711413 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-711413 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3: (1m46.067224672s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (106.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-149000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1a943c6d-5e49-420d-9e7e-ccbfcb1605df] Pending
helpers_test.go:344: "busybox" [1a943c6d-5e49-420d-9e7e-ccbfcb1605df] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1a943c6d-5e49-420d-9e7e-ccbfcb1605df] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.025973906s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-149000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.61s)
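
The harness polls the busybox pod through its Go helpers; a roughly equivalent manual sequence (kubectl wait standing in for the helper's polling) would be:

    # Create the fixture, wait for readiness, then run the same ulimit probe the test uses.
    kubectl --context old-k8s-version-149000 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-149000 wait pod -l integration-test=busybox --for=condition=Ready --timeout=8m0s
    kubectl --context old-k8s-version-149000 exec busybox -- /bin/sh -c "ulimit -n"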

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-149000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-149000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.023842095s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-149000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.14s)
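
For reference, the addon step above maps onto two commands: one to switch on metrics-server with its image and registry overridden, one to confirm the deployment picked the overrides up:

    # Override the metrics-server image/registry while the cluster is live, then inspect the deployment.
    minikube addons enable metrics-server -p old-k8s-version-149000 \
        --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
        --registries=MetricsServer=fake.domain
    kubectl --context old-k8s-version-149000 describe deploy/metrics-server -n kube-system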

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (63.06s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-891260 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-891260 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3: (1m3.061087686s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (63.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.57s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-408472 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5c8e4faa-fb22-4e2f-a383-de7b5122346b] Pending
helpers_test.go:344: "busybox" [5c8e4faa-fb22-4e2f-a383-de7b5122346b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5c8e4faa-fb22-4e2f-a383-de7b5122346b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.034998105s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-408472 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.57s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.4s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-408472 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-408472 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.294206011s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-408472 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.40s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.5s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-711413 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [49340f84-ca4d-4b97-af9e-87640bf8f354] Pending
helpers_test.go:344: "busybox" [49340f84-ca4d-4b97-af9e-87640bf8f354] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [49340f84-ca4d-4b97-af9e-87640bf8f354] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.031302426s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-711413 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.50s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.39s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-711413 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-711413 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.275213897s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-711413 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.39s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.71s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-891260 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-891260 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.714107658s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.71s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (12.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-891260 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-891260 --alsologtostderr -v=3: (12.116374031s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.12s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-891260 -n newest-cni-891260
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-891260 -n newest-cni-891260: exit status 7 (64.213396ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-891260 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)
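
The non-zero exit above is expected: in this run, minikube status exits with status 7 and prints "Stopped" while the host is down, and the test accepts that before enabling the dashboard addon. A rough manual equivalent (profile name taken from this run):

    # status exit 7 here corresponds to the stopped host shown in the stdout above
    out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-891260; echo "status exit: $?"
    out/minikube-linux-amd64 addons enable dashboard -p newest-cni-891260 --images=MetricsScraper=registry.k8s.io/echoserver:1.4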

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (51.59s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-891260 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-891260 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3: (51.212626939s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-891260 -n newest-cni-891260
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (51.59s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-891260 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.5s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-891260 --alsologtostderr -v=1
E0717 19:52:44.379045 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/functional-685960/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-891260 -n newest-cni-891260
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-891260 -n newest-cni-891260: exit status 2 (249.116851ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-891260 -n newest-cni-891260
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-891260 -n newest-cni-891260: exit status 2 (248.535991ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-891260 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-891260 -n newest-cni-891260
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-891260 -n newest-cni-891260
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.50s)
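
The two "exit status 2" results above are the expected shape of a paused profile in this run: the API server reports Paused and the kubelet reports Stopped, and unpausing restores both. A rough manual equivalent (profile name taken from this run):

    out/minikube-linux-amd64 pause -p newest-cni-891260
    out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-891260   # shown above as Paused (exit status 2)
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-891260     # shown above as Stopped (exit status 2)
    out/minikube-linux-amd64 unpause -p newest-cni-891260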

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (103.86s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-114855 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-114855 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3: (1m43.857693578s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (103.86s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (803.73s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-149000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-149000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (13m23.437833328s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-149000 -n old-k8s-version-149000
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (803.73s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (596.52s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-408472 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-408472 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3: (9m56.230157564s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-408472 -n no-preload-408472
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (596.52s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (565.82s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-711413 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-711413 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3: (9m25.527898899s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-711413 -n default-k8s-diff-port-711413
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (565.82s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.51s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-114855 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e55efa6c-89f0-4aac-8379-ec8101713c23] Pending
helpers_test.go:344: "busybox" [e55efa6c-89f0-4aac-8379-ec8101713c23] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e55efa6c-89f0-4aac-8379-ec8101713c23] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.022035288s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-114855 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.51s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-114855 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-114855 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.249220227s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-114855 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.35s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (702.63s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-114855 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3
E0717 19:58:01.330725 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/functional-685960/client.crt: no such file or directory
E0717 20:01:00.133614 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt: no such file or directory
E0717 20:01:03.519788 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt: no such file or directory
E0717 20:02:23.185334 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt: no such file or directory
E0717 20:03:01.331023 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/functional-685960/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-114855 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3: (11m42.346296135s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-114855 -n embed-certs-114855
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (702.63s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (104.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-395471 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-395471 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m44.019045574s)
--- PASS: TestNetworkPlugins/group/auto/Start (104.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (90.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-395471 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0717 20:19:03.186652 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-395471 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m30.037330533s)
--- PASS: TestNetworkPlugins/group/flannel/Start (90.04s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (112.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-395471 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-395471 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m52.306142272s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (112.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-395471 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (12.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-395471 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-stdjr" [01ae0a08-7324-47bc-9d9d-a64fe94c35cb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-stdjr" [01ae0a08-7324-47bc-9d9d-a64fe94c35cb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.012040866s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-t9kfh" [0c9ee151-5c1d-4626-936b-0bfe3846427c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.022613925s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-395471 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-395471 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-395471 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.20s)
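
The three checks above exercise different paths through the network plugin: DNS resolves the in-cluster kubernetes.default name, Localhost connects to the pod's own port over 127.0.0.1, and HairPin connects back to the pod through its own service name (netcat), i.e. hairpin traffic. A compact manual equivalent, reusing the commands from this run (context and names taken from above):

    kubectl --context auto-395471 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-395471 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-395471 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"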

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-395471 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-395471 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-fxf2p" [911d40fe-db73-488e-9458-a2238d4f56b3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-fxf2p" [911d40fe-db73-488e-9458-a2238d4f56b3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.011205765s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-395471 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-395471 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-395471 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (106.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-395471 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0717 20:20:19.625219 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/client.crt: no such file or directory
E0717 20:20:19.631199 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/client.crt: no such file or directory
E0717 20:20:19.641529 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/client.crt: no such file or directory
E0717 20:20:19.661936 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/client.crt: no such file or directory
E0717 20:20:19.703255 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/client.crt: no such file or directory
E0717 20:20:19.783801 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/client.crt: no such file or directory
E0717 20:20:19.944310 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/client.crt: no such file or directory
E0717 20:20:20.264519 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/client.crt: no such file or directory
E0717 20:20:20.905552 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/client.crt: no such file or directory
E0717 20:20:22.186684 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-395471 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m46.741301381s)
--- PASS: TestNetworkPlugins/group/bridge/Start (106.74s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (112.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-395471 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0717 20:20:29.868007 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/client.crt: no such file or directory
E0717 20:20:40.108463 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/client.crt: no such file or directory
E0717 20:20:45.503286 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/client.crt: no such file or directory
E0717 20:20:45.508623 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/client.crt: no such file or directory
E0717 20:20:45.519079 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/client.crt: no such file or directory
E0717 20:20:45.540237 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/client.crt: no such file or directory
E0717 20:20:45.581151 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/client.crt: no such file or directory
E0717 20:20:45.661630 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/client.crt: no such file or directory
E0717 20:20:45.822272 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/client.crt: no such file or directory
E0717 20:20:46.142711 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/client.crt: no such file or directory
E0717 20:20:46.871611 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/client.crt: no such file or directory
E0717 20:20:48.151815 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/client.crt: no such file or directory
E0717 20:20:50.712474 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/client.crt: no such file or directory
E0717 20:20:55.833535 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/client.crt: no such file or directory
E0717 20:21:00.134227 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/addons-962955/client.crt: no such file or directory
E0717 20:21:00.589738 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/client.crt: no such file or directory
E0717 20:21:03.520058 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/ingress-addon-legacy-946642/client.crt: no such file or directory
E0717 20:21:06.074222 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/client.crt: no such file or directory
E0717 20:21:20.423689 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/client.crt: no such file or directory
E0717 20:21:20.429019 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/client.crt: no such file or directory
E0717 20:21:20.439216 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/client.crt: no such file or directory
E0717 20:21:20.459588 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/client.crt: no such file or directory
E0717 20:21:20.500008 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/client.crt: no such file or directory
E0717 20:21:20.580443 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/client.crt: no such file or directory
E0717 20:21:20.741734 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/client.crt: no such file or directory
E0717 20:21:21.062397 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/client.crt: no such file or directory
E0717 20:21:21.702623 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/client.crt: no such file or directory
E0717 20:21:22.983689 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/client.crt: no such file or directory
E0717 20:21:25.543965 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/client.crt: no such file or directory
E0717 20:21:26.554802 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-395471 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m52.715068165s)
--- PASS: TestNetworkPlugins/group/calico/Start (112.72s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-395471 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-395471 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-rjrkp" [40d196f0-fa4c-4c1f-a73f-6a6c4e800ac2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0717 20:21:30.664277 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-rjrkp" [40d196f0-fa4c-4c1f-a73f-6a6c4e800ac2] Running
E0717 20:21:40.904608 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/client.crt: no such file or directory
E0717 20:21:41.550298 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/old-k8s-version-149000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.010145039s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-395471 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-395471 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-395471 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-395471 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (13.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-395471 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-ljzkm" [93bb60d4-de08-4d63-a2b0-85512e5846dd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-ljzkm" [93bb60d4-de08-4d63-a2b0-85512e5846dd] Running
E0717 20:22:07.515538 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.027506672s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.76s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (77.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-395471 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E0717 20:22:01.384847 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-395471 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m17.389056489s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (77.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-395471 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-395471 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-395471 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-5kfgd" [6bda12cf-7684-4ff0-b536-db87ef356ceb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.02727261s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-395471 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-395471 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-fvnq8" [21417713-a78b-4225-874d-ec0c946a4955] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-fvnq8" [21417713-a78b-4225-874d-ec0c946a4955] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.014275264s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.58s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (89.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-395471 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-395471 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m29.556167933s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (89.56s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-395471 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-395471 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-395471 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-76rb9" [60a83e46-3800-4786-8160-0e3ffe3a73d4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.019058546s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-395471 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-395471 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-hxwlg" [26ea107a-9c63-439c-a630-a003d2def492] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0717 20:23:29.436652 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/no-preload-408472/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-hxwlg" [26ea107a-9c63-439c-a630-a003d2def492] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.010960388s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-395471 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-395471 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-395471 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-395471 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-395471 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-c8n6s" [26a69e9f-10d6-4c67-b15c-265a1fb8972b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-c8n6s" [26a69e9f-10d6-4c67-b15c-265a1fb8972b] Running
E0717 20:24:04.266966 1068954 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1061725/.minikube/profiles/default-k8s-diff-port-711413/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.011015676s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.46s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-395471 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-395471 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-395471 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    

Test skip (36/288)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.27.3/cached-images 0
13 TestDownloadOnly/v1.27.3/binaries 0
14 TestDownloadOnly/v1.27.3/kubectl 0
18 TestDownloadOnlyKic 0
29 TestAddons/parallel/Olm 0
39 TestDockerFlags 0
42 TestDockerEnvContainerd 0
44 TestHyperKitDriverInstallOrUpdate 0
45 TestHyperkitDriverSkipUpgrade 0
96 TestFunctional/parallel/DockerEnv 0
97 TestFunctional/parallel/PodmanEnv 0
107 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
108 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
109 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
110 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
111 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
112 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
113 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
114 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
145 TestGvisorAddon 0
146 TestImageBuild 0
179 TestKicCustomNetwork 0
180 TestKicExistingNetwork 0
181 TestKicCustomSubnet 0
182 TestKicStaticIP 0
213 TestChangeNoneUser 0
216 TestScheduledStopWindows 0
218 TestSkaffold 0
220 TestInsufficientStorage 0
224 TestMissingContainerUpgrade 0
235 TestStartStop/group/disable-driver-mounts 0.15
242 TestNetworkPlugins/group/kubenet 3.5
250 TestNetworkPlugins/group/cilium 3.54
x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.27.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.27.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.3/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.27.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.3/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.27.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)
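This skip, like several that follow, comes down to the run being pinned to the CRI-O container runtime. A sketch of how a comparable CRI-O run is started locally (the profile name is illustrative; the flags are standard minikube ones):

    minikube start -p crio-demo --driver=kvm2 --container-runtime=crio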

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
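All eight TunnelCmd subtests above skip because the test user cannot run route without a password. One way to let them run on a CI host is a sudoers drop-in (a sketch only; the username matches the /home/jenkins paths seen earlier in this report, and the binary paths are assumptions to adjust per distro):

    echo 'jenkins ALL=(ALL) NOPASSWD: /sbin/route, /sbin/ip' | sudo tee /etc/sudoers.d/minikube-tunnel
    sudo chmod 440 /etc/sudoers.d/minikube-tunnel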

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:296: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-178387" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-178387
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:522: 
----------------------- debugLogs start: kubenet-395471 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-395471

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-395471

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-395471

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-395471

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-395471

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-395471

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-395471

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-395471

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-395471

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-395471

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-395471"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-395471"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-395471"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-395471

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-395471"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-395471"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-395471" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-395471" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-395471" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-395471" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-395471" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-395471" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-395471" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-395471" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-395471"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-395471"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-395471"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-395471"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-395471"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-395471" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-395471" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-395471" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-395471"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-395471"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-395471"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-395471"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-395471"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-395471

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-395471"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-395471"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-395471"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-395471"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-395471"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-395471"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-395471"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-395471"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-395471"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-395471"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-395471"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-395471"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-395471"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-395471"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-395471"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-395471"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-395471"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-395471"

                                                
                                                
----------------------- debugLogs end: kubenet-395471 [took: 3.345590364s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-395471" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-395471
--- SKIP: TestNetworkPlugins/group/kubenet (3.50s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-395471 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-395471

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-395471

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-395471

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-395471

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-395471

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-395471

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-395471

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-395471

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-395471

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-395471

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395471"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395471"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395471"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-395471

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395471"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395471"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-395471" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-395471" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-395471" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-395471" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-395471" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-395471" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-395471" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-395471" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395471"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395471"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395471"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395471"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395471"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-395471

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-395471

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-395471" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-395471" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-395471

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-395471

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-395471" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-395471" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-395471" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-395471" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-395471" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395471"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395471"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395471"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395471"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395471"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-395471

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395471"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395471"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395471"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395471"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395471"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395471"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395471"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395471"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395471"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395471"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395471"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395471"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395471"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395471"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395471"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395471"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395471"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-395471" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395471"

                                                
                                                
----------------------- debugLogs end: cilium-395471 [took: 3.391730399s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-395471" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-395471
--- SKIP: TestNetworkPlugins/group/cilium (3.54s)

                                                
                                    